diff --git a/dev/__pycache__/docs_hooks.cpython-312.pyc b/dev/__pycache__/docs_hooks.cpython-312.pyc index b59e665c1..bfc5d7e38 100644 Binary files a/dev/__pycache__/docs_hooks.cpython-312.pyc and b/dev/__pycache__/docs_hooks.cpython-312.pyc differ diff --git a/dev/basic_usage/index.html b/dev/basic_usage/index.html index b997165a0..a265fc511 100644 --- a/dev/basic_usage/index.html +++ b/dev/basic_usage/index.html @@ -363,6 +363,21 @@ + +
When installing packages globally, you can use the --no-activation
option to prevent the insertion of environment activation code into the installed executable scripts. This means that when you run the installed executable, it won't modify the PATH
or CONDA_PREFIX
environment variables beforehand.
Example:
+This option can be useful when you want more control over environment activation, or when you're using the installed executables in contexts where automatic activation might interfere with other processes.
You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.
-- uses: prefix-dev/setup-pixi@v0.5.1
-- run: pixi run cowpy "Thanks for using pixi"
+
See the GitHub Actions documentation for more details.
diff --git a/dev/reference/cli/index.html b/dev/reference/cli/index.html
index e53e322b1..25664be74 100644
--- a/dev/reference/cli/index.html
+++ b/dev/reference/cli/index.html
@@ -3354,6 +3354,7 @@ Optionspixi global install ruff
# multiple packages can be installed at once
@@ -3371,6 +3372,9 @@ Options
# Install for a specific platform, only useful on osx-arm64
pixi global install --platform osx-64 ruff
+
+# Install without inserting activation code into the executable script
+pixi global install ruff --no-activation
Tip
diff --git a/dev/search/search_index.json b/dev/search/search_index.json index 0a4a33a0b..d32b80a36 100644 --- a/dev/search/search_index.json +++ b/dev/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Getting Started","text":"Pixi is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Use pixi cross-platform, on Windows, Mac and Linux.
"},{"location":"#installation","title":"Installation","text":"To install pixi
you can run the following command in your terminal:
curl -fsSL https://pixi.sh/install.sh | bash\n
The above invocation will automatically download the latest version of pixi
, extract it, and move the pixi
binary to ~/.pixi/bin
. If this directory does not already exist, the script will create it.
The script will also update your ~/.bash_profile
to include ~/.pixi/bin
in your PATH, allowing you to invoke the pixi
command from anywhere.
PowerShell
:
iwr -useb https://pixi.sh/install.ps1 | iex\n
winget
: winget install prefix-dev.pixi\n
The above invocation will automatically download the latest version of pixi
, extract it, and move the pixi
binary to LocalAppData/pixi/bin
. If this directory does not already exist, the script will create it. The command will also automatically add LocalAppData/pixi/bin
to your path allowing you to invoke pixi
from anywhere.
Tip
You might need to restart your terminal or source your shell for the changes to take effect.
You can find more options for the installation script here.
"},{"location":"#autocompletion","title":"Autocompletion","text":"To get autocompletion follow the instructions for your shell. Afterwards, restart the shell or source the shell config file.
"},{"location":"#bash-default-on-most-linux-systems","title":"Bash (default on most Linux systems)","text":"echo 'eval \"$(pixi completion --shell bash)\"' >> ~/.bashrc\n
"},{"location":"#zsh-default-on-macos","title":"Zsh (default on macOS)","text":"echo 'eval \"$(pixi completion --shell zsh)\"' >> ~/.zshrc\n
"},{"location":"#powershell-pre-installed-on-all-windows-systems","title":"PowerShell (pre-installed on all Windows systems)","text":"Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'\n
Failure because no profile file exists
Make sure your profile file exists, otherwise create it with:
New-Item -Path $PROFILE -ItemType File -Force\n
"},{"location":"#fish","title":"Fish","text":"echo 'pixi completion --shell fish | source' > ~/.config/fish/completions/pixi.fish\n
"},{"location":"#nushell","title":"Nushell","text":"Add the following to the end of your Nushell env file (find it by running $nu.env-path
in Nushell):
mkdir ~/.cache/pixi\npixi completion --shell nushell | save -f ~/.cache/pixi/completions.nu\n
And add the following to the end of your Nushell configuration (find it by running $nu.config-path
):
use ~/.cache/pixi/completions.nu *\n
"},{"location":"#elvish","title":"Elvish","text":"echo 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv\n
"},{"location":"#alternative-installation-methods","title":"Alternative installation methods","text":"Although we recommend installing pixi through the above method, we also provide additional installation methods.
"},{"location":"#homebrew","title":"Homebrew","text":"Pixi is available via homebrew. To install pixi via homebrew simply run:
brew install pixi\n
"},{"location":"#windows-installer","title":"Windows installer","text":"We provide an msi
installer on our GitHub releases page. The installer will download pixi and add it to the path.
pixi is 100% written in Rust, and therefore it can be installed, built, and tested with cargo. To start using pixi from a source build, run:
cargo install --locked --git https://github.com/prefix-dev/pixi.git pixi\n
We don't publish to crates.io
anymore, so you need to install it from the repository. The reason for this is that we depend on some unpublished crates, which prevents us from publishing to crates.io
.
or when you want to make changes use:
cargo build\ncargo test\n
If you have any issues building because of the dependency on rattler
check out its compile steps.
The installation script has several options that can be manipulated through environment variables.
Variable Description Default ValuePIXI_VERSION
The version of pixi to install; can be used to upgrade or downgrade. latest
PIXI_HOME
The location of the binary folder. $HOME/.pixi
PIXI_ARCH
The architecture the pixi version was built for. uname -m
PIXI_NO_PATH_UPDATE
If set, the $PATH
will not be updated to add pixi
to it. TMP_DIR
The temporary directory the script uses to download to and unpack the binary from. /tmp
For example, on Apple Silicon, you can force the installation of the x86 version:
curl -fsSL https://pixi.sh/install.sh | PIXI_ARCH=x86_64 bash\n
Or set the version curl -fsSL https://pixi.sh/install.sh | PIXI_VERSION=v0.18.0 bash\n
The installation script has several options that can be manipulated through environment variables.
Variable Environment variable Description Default ValuePixiVersion
PIXI_VERSION
The version of pixi to install; can be used to upgrade or downgrade. latest
PixiHome
PIXI_HOME
The location of the installation. $Env:USERPROFILE\\.pixi
NoPathUpdate
If set, the $PATH
will not be updated to add pixi
to it. For example, set the version using:
iwr -useb https://pixi.sh/install.ps1 | iex -Args \"-PixiVersion v0.18.0\"\n
"},{"location":"#update","title":"Update","text":"Updating is as simple as installing: rerunning the installation script gets you the latest version.
pixi self-update\n
Or get a specific pixi version using: pixi self-update --version x.y.z\n
Note
If you've used a package manager like brew
, mamba
, conda
, paru
etc. to install pixi
, it's preferable to use that package manager's built-in update mechanism, e.g. brew upgrade pixi
.
To uninstall pixi from your system, simply remove the binary.
Linux & macOSWindowsrm ~/.pixi/bin/pixi\n
$PIXI_BIN = \"$Env:LocalAppData\\pixi\\bin\\pixi\"; Remove-Item -Path $PIXI_BIN\n
After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi
directory and remove the directory from your path.
When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:
[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)\n
Customize your badge
To further customize the look and feel of your badge, you can add &style=<custom-style>
at the end of the URL. See the documentation on shields.io for more info.
scipy
port using xtensor
conda
, mamba
, poetry
, pip
","text":"Tool Installs python Builds packages Runs predefined tasks Has lock files builtin Fast Use without python Conda \u2705 \u274c \u274c \u274c \u274c \u274c Mamba \u2705 \u274c \u274c \u274c \u2705 \u2705 Pip \u274c \u2705 \u274c \u274c \u274c \u274c Pixi \u2705 \ud83d\udea7 \u2705 \u2705 \u2705 \u2705 Poetry \u274c \u2705 \u274c \u2705 \u274c \u274c"},{"location":"FAQ/#why-the-name-pixi","title":"Why the name pixi
","text":"Starting with the name prefix
we iterated until we had a name that was easy to pronounce, spell, and remember. There also wasn't a CLI tool using that name yet. Unlike px
, pex
, pax
, etc. We think it sparks curiosity and fun; if you don't agree, I'm sorry, but you can always alias it to whatever you like.
alias not_pixi=\"pixi\"\n
PowerShell:
New-Alias -Name not_pixi -Value pixi\n
"},{"location":"FAQ/#where-is-pixi-build","title":"Where is pixi build
","text":"TL;DR: It's coming, we promise!
pixi build
is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool which we're creating with rattler-build
which will be used as a library in pixi.
Ensure you've got pixi
set up. If running pixi
doesn't show the help text, see the getting started section.
pixi\n
Initialize a new project and navigate to the project directory.
pixi init pixi-hello-world\ncd pixi-hello-world\n
Add the dependencies you would like to use.
pixi add python\n
Create a file named hello_world.py
in the directory and paste the following code into the file.
def hello():\n print(\"Hello World, to the new revolution in package management.\")\n\nif __name__ == \"__main__\":\n hello()\n
Run the code inside the environment.
pixi run python hello_world.py\n
You can also put this run command in a task.
pixi task add hello python hello_world.py\n
After adding the task, you can run the task using its name.
pixi run hello\n
Use the shell
command to activate the environment and start a new shell in there.
pixi shell\npython\nexit()\n
You've just learned the basic features of pixi:
Feel free to play around with what you just learned like adding more tasks, dependencies or code.
Happy coding!
"},{"location":"basic_usage/#use-pixi-as-a-global-installation-tool","title":"Use pixi as a global installation tool","text":"Use pixi to install tools on your machine.
Some notable examples:
# Awesome cross shell prompt, huge tip when using pixi!\npixi global install starship\n\n# Want to try a different shell?\npixi global install fish\n\n# Install other prefix.dev tools\npixi global install rattler-build\n\n# Install a linter you want to use in multiple projects.\npixi global install ruff\n
"},{"location":"basic_usage/#use-pixi-in-github-actions","title":"Use pixi in GitHub Actions","text":"You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.
- uses: prefix-dev/setup-pixi@v0.5.1\n- run: pixi run cowpy \"Thanks for using pixi\"\n
See the GitHub Actions documentation for more details.
"},{"location":"vision/","title":"Vision","text":"We created pixi
because we want to have a cargo/npm/yarn like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think that the user experience can be improved a lot. Modern package managers like cargo
have shown us how great a package manager can be. We want to bring that experience to the conda ecosystem.
We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:
We are building on top of the conda packaging ecosystem, this means that we have a huge number of packages available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community maintained and very open to contributions. It is widely used in data science and scientific computing, robotics and other fields. And has a proven track record.
"},{"location":"vision/#target-languages","title":"Target languages","text":"Essentially, we are language agnostic: we target any language that can be installed with conda, including C++, Python, Rust, Zig, etc. But we do believe the Python ecosystem can benefit from a good package manager based on conda, so we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when used for multi-language projects, e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system-level packages.
"},{"location":"advanced/authentication/","title":"Authenticate pixi with a server","text":"You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.
Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n <HOST> The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n --token <TOKEN> The token to use (for authentication with prefix.dev)\n --username <USERNAME> The username to use (for basic HTTP authentication)\n --password <PASSWORD> The password to use (for basic HTTP authentication)\n --conda-token <CONDA_TOKEN> The token to use on anaconda.org / quetz authentication\n -v, --verbose... More output per occurrence\n -q, --quiet... Less output per occurrence\n -h, --help Print help\n
The different options are \"token\", \"conda-token\" and \"username + password\".
The token variant implements a standard \"Bearer Token\" authentication as is used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authentication: Bearer <TOKEN>
.
The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/...
.
. The last option, username & password, is used for \"Basic HTTP Authentication\". This is the equivalent of adding http://user:password@myserver.com/...
. This authentication method can be configured quite easily with a reverse proxy such as NGINX or Apache and is thus commonly used in self-hosted systems.
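For intuition, the header that Basic HTTP Authentication adds to each request can be reproduced in a few lines of Python (a sketch of the mechanism itself, not of pixi's internals):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # http://user:password@myserver.com/... is shorthand for sending
    # this header with every request.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("user", "password"))
```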
Login to prefix.dev:
pixi auth login prefix.dev --token pfx_jj8WDzvnuTHEGdAhwRZMC1Ag8gSto8\n
Login to anaconda.org:
pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n
Login to a basic HTTP secured server:
pixi auth login myserver.com --username user --password password\n
"},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.
On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access
program that comes pre-installed on macOS. Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On Linux, one can use GNOME Keyring
(or just Keyring) to access credentials that are securely stored by libsecret
. Searching for rattler
should list all the credentials stored by pixi and other rattler-based programs.
If you run on a server with none of the aforementioned keychains available, then pixi falls back to store the credentials in an insecure JSON file. This JSON file is located at ~/.rattler/credentials.json
and contains the credentials.
You can use the RATTLER_AUTH_FILE
environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.
E.g.
export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n
The JSON should follow the following format:
{\n \"*.prefix.dev\": {\n \"BearerToken\": \"your_token\"\n },\n \"otherhost.com\": {\n \"BasicHTTP\": {\n \"username\": \"your_username\",\n \"password\": \"your_password\"\n }\n },\n \"conda.anaconda.org\": {\n \"CondaToken\": \"your_token\"\n }\n}\n
Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev
also matches repo.prefix.dev
).
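The wildcard behaviour described in the note can be sketched with Python's fnmatch (illustrative only; rattler's actual host matching may differ):

```python
import fnmatch
import json

credentials = json.loads("""
{
  "*.prefix.dev": { "BearerToken": "your_token" },
  "conda.anaconda.org": { "CondaToken": "your_token" }
}
""")

def lookup(host: str, creds: dict):
    # Exact hostnames match directly; a pattern like *.prefix.dev
    # matches any subdomain of prefix.dev.
    for pattern, entry in creds.items():
        if host == pattern or fnmatch.fnmatch(host, pattern):
            return entry
    return None
```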
Lastly, you can set the authentication override file in the global configuration file.
"},{"location":"advanced/authentication/#pypi-authentication","title":"PyPI authentication","text":"Currently, we support the following methods for authenticating against PyPI:
.netrc
file authentication. We want to add more methods in the future, so if you have a specific method you would like to see, please let us know.
"},{"location":"advanced/authentication/#keyring-authentication","title":"Keyring authentication","text":"Currently, pixi supports the uv method of authentication through the Python keyring library. To enable this, use the CLI flag --pypi-keyring-provider
which can either be set to subprocess
(activated) or disabled
.
# From an existing pixi project\npixi install --pypi-keyring-provider subprocess\n
This option can also be set in the global configuration file under pypi-config.
"},{"location":"advanced/authentication/#installing-keyring","title":"Installing keyring","text":"To install keyring you can use pixi global install:
Either use:
pixi global install keyring\n
GCP and other backends The downside of this method is that, because you cannot yet inject into a pixi global environment, installing different keyring backends is not possible; only the default keyring backend can be used. Give the issue a \ud83d\udc4d if you would like to see inject as a feature.
Or alternatively, you can install keyring using pipx:
# Install pipx if you haven't already\npixi global install pipx\npipx install keyring\n\n# For Google Artifact Registry, also install and initialize its keyring backend.\n# Inject this into the pipx environment\npipx inject keyring keyrings.google-artifactregistry-auth --index-url https://pypi.org/simple\ngcloud auth login\n
"},{"location":"advanced/authentication/#using-keyring-with-basic-auth","title":"Using keyring with Basic Auth","text":"Use keyring to store your credentials e.g:
keyring set https://my-index/simple your_username\n# prompt will appear for your password\n
"},{"location":"advanced/authentication/#configuration","title":"Configuration","text":"Make sure to include username@
in the URL of the registry. An example of this would be:
[pypi-options]\nindex-url = \"https://username@custom-registry.com/simple\"\n
"},{"location":"advanced/authentication/#gcp","title":"GCP","text":"For Google Artifact Registry, you can use the Google Cloud SDK to authenticate. Make sure to have run gcloud auth login
before using pixi. Another thing to note is that you need to add oauth2accesstoken
to the URL of the registry. An example of this would be:
# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://oauth2accesstoken@<location>-python.pkg.dev/<project>/<repository>/simple\"]\n
Note
Include the /simple
at the end, replace the <location>
, <project>, and <repository> placeholders with your own values.
To find this URL more easily, you can use the gcloud
command:
gcloud artifacts print-settings python --project=<project> --repository=<repository> --location=<location>\n
"},{"location":"advanced/authentication/#azure-devops","title":"Azure DevOps","text":"Similarly for Azure DevOps, you can use the Azure keyring backend for authentication. The backend, along with installation instructions can be found at keyring.artifacts.
After following the instructions and making sure that keyring works correctly, you can use the following configuration:
"},{"location":"advanced/authentication/#configuration_2","title":"Configuration","text":"# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://VssSessionToken@pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/pypi/simple/\"]\n
This should allow for getting packages from the Azure DevOps artifact registry."},{"location":"advanced/authentication/#installing-your-environment","title":"Installing your environment","text":"To actually install, either configure your Global Config or use the flag:
pixi install --pypi-keyring-provider subprocess\n
"},{"location":"advanced/authentication/#netrc-file","title":".netrc
file","text":"pixi
allows you to access private registries securely by authenticating with credentials stored in a .netrc
file.
.netrc
file can be stored in your home directory ($HOME/.netrc
for Unix-like systems)%HOME%\\_netrc
).NETRC
variable (export NETRC=/my/custom/location/.netrc
). e.g export NETRC=/my/custom/location/.netrc pixi install
In the .netrc
file, you store authentication details like this:
machine registry-name\nlogin admin\npassword admin\n
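Python's standard-library netrc module parses the same format, which gives a quick way to sanity-check an entry (a sketch for verification, not part of pixi):

```python
import netrc
import os
import tempfile

content = "machine registry-name\nlogin admin\npassword admin\n"
with tempfile.NamedTemporaryFile("w", suffix=".netrc", delete=False) as f:
    f.write(content)
    path = f.name

# authenticators() returns a (login, account, password) tuple.
login, _account, password = netrc.netrc(path).authenticators("registry-name")
os.unlink(path)
```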
For more details, you can access the .netrc docs."},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"All logic deciding which dependencies can be installed from which channel comes down to the instructions we give the solver.
The actual code regarding this is in the rattler_solve
crate. This might however be hard to read. Therefore, this document will continue with simplified flow charts.
When a user defines a channel per dependency, the solver needs to know the other channels are unusable for this dependency.
[project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackgex = { version = \"*\", channel = \"my-channel\" }\n
In the packagex
example, the solver will understand that the package is only available in my-channel
and will not look for it in conda-forge
. The flowchart of the logic that excludes all other channels:
flowchart TD\n A[Start] --> B[Given a Dependency]\n B --> C{Channel Specific Dependency?}\n C -->|Yes| D[Exclude All Other Channels for This Package]\n C -->|No| E{Any Other Dependencies?}\n E -->|Yes| B\n E -->|No| F[End]\n D --> E
"},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"Channel priority is dictated by the order in the project.channels
array, where the first channel is the highest priority. For instance:
[project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
If the package is found in conda-forge
the solver will not look for it in my-channel
and your-channel
, because it tells the solver they are excluded. If the package is not found in conda-forge
the solver will look for it in my-channel
and if it is found there it will tell the solver to exclude your-channel
for this package. This diagram explains the logic: flowchart TD\n A[Start] --> B[Given a Dependency]\n B --> C{Loop Over Channels}\n C --> D{Package in This Channel?}\n D -->|No| C\n D -->|Yes| E{\"This the first channel\n for this package?\"}\n E -->|Yes| F[Include Package in Candidates]\n E -->|No| G[Exclude Package from Candidates]\n F --> H{Any Other Channels?}\n G --> H\n H -->|Yes| C\n H -->|No| I{Any Other Dependencies?}\n I -->|No| J[End]\n I -->|Yes| B
This method ensures the solver only adds a package to the candidates if it's found in the highest priority channel available. If you have 10 channels and the package is found in the 5th channel it will exclude the next 5 channels from the candidates if they also contain the package.
"},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"A common use case is to use pytorch
with nvidia
drivers, while also needing the conda-forge
channel for the main dependencies.
[project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
What this will do is get as much as possible from the nvidia/label/cuda-11.8.0
channel, which is actually only the cuda
package. Then it will get all packages from the nvidia
channel, which is a little more and some packages overlap the nvidia
and conda-forge
channel. Like the cuda-cudart
package, which will now only be retrieved from the nvidia
channel because of the priority logic.
Then it will get the packages from the conda-forge
channel, which is the main channel for the dependencies.
But the user only wants the pytorch packages from the pytorch
channel, which is why pytorch
is added last and the dependencies are added as channel specific dependencies.
We don't define the pytorch
channel before conda-forge
because we want to get as much as possible from the conda-forge
as the pytorch channel is not always shipping the best versions of all packages.
For example, it also ships the ffmpeg
package, but only an old version which doesn't work with the newer pytorch versions. Thus breaking the installation if we would skip the conda-forge
channel for ffmpeg
with the priority logic.
If you want to force a specific priority for a channel, you can use the priority
(int) key in the channel definition. The higher the number, the higher the priority. Unspecified priorities default to 0, but the index in the array still counts as a priority: the first channel in the list has the highest priority.
This priority definition is mostly important for multiple environments with different channel priorities, as by default feature channels are prepended to the project channels.
[project]\nname = \"test_channel_priority\"\nplatforms = [\"linux-64\", \"osx-64\", \"win-64\", \"osx-arm64\"]\nchannels = [\"conda-forge\"]\n\n[feature.a]\nchannels = [\"nvidia\"]\n\n[feature.b]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = 1}]\n\n[feature.c]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = -1}]\n\n[environments]\na = [\"a\"]\nb = [\"b\"]\nc = [\"c\"]\n
This example creates 4 environments, a
, b
, c
, and the default environment, which will have the following channel order: Environment Resulting channel order default conda-forge
a nvidia
, conda-forge
b nvidia
, pytorch
, conda-forge
c pytorch
, conda-forge
, nvidia
Check priority result with pixi info
Using pixi info
you can check the priority of the channels in the environment.
pixi info\nEnvironments\n------------\n Environment: default\n Features: default\n Channels: conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n Environment: a\n Features: a, default\n Channels: nvidia, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n Environment: b\n Features: b, default\n Channels: nvidia, pytorch, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n Environment: c\n Features: c, default\n Channels: pytorch, conda-forge, nvidia\nDependency count: 0\nTarget platforms: linux-64\n
"},{"location":"advanced/explain_info_command/","title":"Info command","text":"pixi info
prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in json
format using the --json
flag, which can be useful for programmatically reading it.
\u279c pixi info\n Pixi version: 0.13.0\n Platform: linux-64\n Virtual packages: __unix=0=0\n : __linux=6.5.12=0\n : __glibc=2.36=0\n : __cuda=12.3=0\n : __archspec=1=x86_64\n Cache dir: /home/user/.cache/rattler/cache\n Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n Version: 0.13.0\n Manifest file: /home/user/development/pixi/pixi.toml\n Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n Features: default\n Channels: conda-forge\n Dependency count: 10\n Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n Target platforms: linux-64, osx-arm64, win-64, osx-64\n Tasks: docs, test-all, test, build, lint, install, build-docs\n
"},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"The first part of the info output is information that is always available and tells you what pixi can read on your machine.
"},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.
"},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"The virtual packages that pixi can find on your machine.
In the Conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that are going to be installed, but rather are being used in the solve step to find if a package can be installed on the machine. A simple example: When a package depends on Cuda drivers being present on the host machine it can do that by depending on the __cuda
virtual package. In that case, if pixi cannot find the __cuda
virtual package on your machine the installation will fail.
The directory where pixi stores its cache. Checkout the cache documentation for more information.
"},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"Check the authentication documentation
"},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"[requires --extended
]
The size of the previously mentioned \"Cache dir\" in Mebibytes.
"},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"Everything below Project
is info about the project you're currently in. This info is only available if your path has a manifest file.
The path to the manifest file that describes the project.
"},{"location":"advanced/explain_info_command/#last-updated","title":"Last updated","text":"The last time the lock file was updated, either manually or by pixi itself.
"},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"The environment info defined per environment. If you don't have any environments defined, this will only show the default
environment.
This lists which features are enabled in the environment. For the default this is only default
The list of channels used in this environment.
"},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"The amount of dependencies defined that are defined for this environment (not the amount of installed dependencies).
"},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"The list of dependencies defined for this environment.
"},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"The platforms the project has defined.
"},{"location":"advanced/github_actions/","title":"GitHub Action","text":"We created prefix-dev/setup-pixi to facilitate using pixi in CI.
"},{"location":"advanced/github_actions/#usage","title":"Usage","text":"- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n pixi-version: v0.30.0\n cache: true\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n
Pin your action versions
Since pixi is not yet stable, the API of this action may change between minor versions. Please pin the versions of this action to a specific version (i.e., prefix-dev/setup-pixi@v0.8.0
) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.
Put the following in your .github/dependabot.yml
file to enable Dependabot for your GitHub Actions:
version: 2\nupdates:\n - package-ecosystem: github-actions\n directory: /\n schedule:\n interval: monthly # (1)!\n groups:\n dependencies:\n patterns:\n - \"*\"\n
daily
, weekly
To see all available input arguments, see the action.yml
file in setup-pixi
. The most important features are described below.
The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock
file is present. It will then use the pixi.lock
file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache
input argument.
Customize your cache key
If you need to customize your cache-key, you can use the cache-key
input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>
.
Only save caches on main
To avoid hitting the 10 GB cache size limit too quickly, you might want to restrict when the cache is saved. This can be done by setting the cache-write
argument.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n cache: true\n cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
"},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments
input argument. This will install all environments that are specified and cache them.
[project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
"},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"The following example will install the py311
and py312
environments in different jobs.
test:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.8.0\n with:\n environments: ${{ matrix.environment }}\n
"},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"The following example will install both the py311
and the py312
environment on the runner.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n environments: >- # (1)!\n py311\n py312\n- run: |\n pixi run -e py311 test\n pixi run -e py312 test\n
separated by spaces, equivalent to
environments: py311 py312\n
Caching behavior if you don't specify environments
If you don't specify any environment, the default
environment will be installed and cached, even if you use other environments.
There are currently three ways to authenticate with pixi:
For more information, see Authentication.
Handle secrets with care
Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }}
syntax. These secrets will always be masked in the logs.
Specify the token using the auth-token
input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
"},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"Specify the username and password using the auth-username
and auth-password
input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments, for example with Artifactory.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n auth-host: custom-artifactory.com\n auth-username: ${{ secrets.PIXI_USERNAME }}\n auth-password: ${{ secrets.PIXI_PASSWORD }}\n
"},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"Specify the conda-token using the conda-token
input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel
) is used at anaconda.org or with Quetz instances.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n auth-host: anaconda.org # (1)!\n conda-token: ${{ secrets.CONDA_TOKEN }}\n
setup-pixi
allows you to run commands inside the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}. This can be useful if you want to run several commands inside the pixi environment without having to prefix each one with pixi run.
- run: | # (1)!\n python --version\n pip install --no-deps -e .\n shell: pixi run bash -e {0}\n
You can even run Python scripts like this:
- run: | # (1)!\n import my_package\n print(\"Hello world!\")\n shell: pixi run python {0}\n
If you want to use PowerShell, you need to specify -Command
as well.
- run: | # (1)!\n python --version | Select-String \"3.11\"\n shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
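These shell: wrappers boil down to the runner writing your step body to a temporary script file and substituting its path for {0}. A rough local sketch of that mechanism (simplified; not the actual runner implementation):

```shell
# Simulate `shell: bash -e {0}`: the runner writes the step's `run:` body
# to a temporary file and substitutes that file's path for {0}.
script="$(mktemp)"
printf '%s\n' 'echo step-start' 'echo step-done' > "$script"  # the step body
bash -e "$script"   # what the runner effectively invokes
rm -f "$script"
```

Because the temporary file is not executable, the invoked program (bash, python, pwsh, pixi run bash, ...) must accept the script path as an argument.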
How does it work under the hood?
Under the hood, the shell: xyz {0}
option is implemented by creating a temporary script file and calling xyz
with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0}
directly but instead have to use shell: pixi run bash {0}
. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell
in the documentation. See the official documentation and ADR 0277 for more information about how the shell:
input works in GitHub Actions.
pixi exec
","text":"With pixi exec
, you can also run a one-off command inside a temporary pixi environment.
- run: | # (1)!\n zstd --version\n shell: pixi exec --spec zstd -- bash -e {0}\n
- run: | # (1)!\n import ruamel.yaml\n # ...\n shell: pixi exec --spec python=3.11.* --spec ruamel.yaml -- python {0}\n
See here for more information about pixi exec
.
Instead of using a custom shell wrapper, you can also make all pixi-installed binaries available to subsequent steps by \"activating\" the installed environment in the currently running job. To this end, setup-pixi
adds all environment variables set when executing pixi run
to $GITHUB_ENV
and, similarly, adds all path modifications to $GITHUB_PATH
. As a result, all installed binaries can be accessed without having to call pixi run
.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n activate-environment: true\n
If you are installing multiple environments, you will need to specify the name of the environment that you want to be activated.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n environments: >-\n py311\n py312\n activate-environment: py311\n
Activating an environment may be more useful than using a custom shell wrapper as it allows non-shell based steps to access binaries on the path. However, be aware that this option augments the environment of your job.
"},{"location":"advanced/github_actions/#-frozen-and-locked","title":"--frozen
and --locked
","text":"You can specify whether setup-pixi
should run pixi install --frozen
or pixi install --locked
depending on the frozen
or the locked
input argument. See the official documentation for more information about the --frozen
and --locked
flags.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n locked: true\n # or\n frozen: true\n
If you don't specify anything, the default behavior is to run pixi install --locked
if a pixi.lock
file is present and pixi install
otherwise.
There are two types of debug logging that you can enable.
"},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"The first one is the debug logging of the action itself. This can be enabled by re-running the action in debug mode:
Debug logging documentation
For more information about debug logging in GitHub Actions, see the official documentation.
"},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"The second type is the debug logging of the pixi executable. This can be specified by setting the log-level
input.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n log-level: vvv # (1)!\n
q
, default
, v
, vv
, or vvv
.If nothing is specified, log-level
will default to default
or vv
depending on if debug logging is enabled for the action.
On self-hosted runners, it may happen that some files are persisted between jobs. This can lead to problems or secrets getting leaked between job runs. To avoid this, you can use the post-cleanup
input to specify the post cleanup behavior of the action (i.e., what happens after all your commands have been executed).
If you set post-cleanup
to true
, the action will delete the following files:
.pixi
environment ~/.rattler
If nothing is specified, post-cleanup
will default to true
.
On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi
to do this.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n post-cleanup: true\n pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
${{ runner.temp }}\\Scripts\\pixi.exe
on Windows. You can also use a preinstalled local version of pixi on the runner by not setting any of the pixi-version
, pixi-url
or pixi-bin-path
inputs. This action will then try to find a local version of pixi in the runner's PATH.
pyproject.toml
as a manifest file for pixi.","text":"setup-pixi
will automatically pick up the pyproject.toml
if it contains a [tool.pixi.project]
section and no pixi.toml
. This can be overwritten by setting the manifest-path
input argument.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n manifest-path: pyproject.toml\n
"},{"location":"advanced/github_actions/#more-examples","title":"More examples","text":"If you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi
repository.
pyproject.toml
in pixi","text":"We support the use of the pyproject.toml
as our manifest file in pixi. This allows the user to keep one file with all configuration. The pyproject.toml
file is a standard for Python projects. We advise against using the pyproject.toml
file for anything other than Python projects; the pixi.toml
is better suited for other types of projects.
pyproject.toml
file","text":"When you already have a pyproject.toml
file in your project, you can run pixi init
in that folder. Pixi will automatically
[tool.pixi.project]
section to the file, with the platform and channel information required by pixi;.gitignore
and .gitattributes
files.If you do not have an existing pyproject.toml
file , you can run pixi init --format pyproject
in your project folder. In that case, pixi will create a pyproject.toml
manifest from scratch with some sane defaults.
The pyproject.toml
file supports the requires_python
field. Pixi understands that field and automatically adds the version to the dependencies.
This is an example of a pyproject.toml
file with the requires_python
field, which will be used as the python dependency:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n
Which is equivalent to:
equivalent pixi.toml[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[dependencies]\npython = \">=3.9\"\n
"},{"location":"advanced/pyproject_toml/#dependency-section","title":"Dependency section","text":"The pyproject.toml
file supports the dependencies
field. Pixi understands that field and automatically adds the dependencies to the project as [pypi-dependencies]
.
This is an example of a pyproject.toml
file with the dependencies
field:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n \"numpy\",\n \"pandas\",\n \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n
Which is equivalent to:
equivalent pixi.toml[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[pypi-dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n\n[dependencies]\npython = \">=3.9\"\n
You can overwrite these with conda dependencies by adding them to the dependencies
field:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n \"numpy\",\n \"pandas\",\n \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n
This would result in the conda dependencies being installed and the pypi dependencies being ignored, as pixi prefers conda dependencies over pypi dependencies.
"},{"location":"advanced/pyproject_toml/#optional-dependencies","title":"Optional dependencies","text":"If your python project includes groups of optional dependencies, pixi will automatically interpret them as pixi features of the same name with the associated pypi-dependencies
.
You can add them to pixi environments manually, or use pixi init
to setup the project, which will create one environment per feature. Self-references to other groups of optional dependencies are also handled.
For instance, imagine you have a project folder with a pyproject.toml
file similar to:
[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n
Running pixi init
in that project folder will transform the pyproject.toml
file into:
[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # if executed on linux\n\n[tool.pixi.environments]\ndefault = {features = [], solve-group = \"default\"}\ntest = {features = [\"test\"], solve-group = \"default\"}\nall = {features = [\"all\", \"test\"], solve-group = \"default\"}\n
In this example, three environments will be created by pixi:
All environments will be solved together, as indicated by the common solve-group
, and added to the lock file. You can edit the [tool.pixi.environments]
section manually to adapt it to your use case (e.g. if you do not need a particular environment).
As the pyproject.toml
file supports the full pixi spec with [tool.pixi]
prepended an example would look like this:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n \"numpy\",\n \"pandas\",\n \"matplotlib\",\n \"ruff\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\ncompilers = \"*\"\ncmake = \"*\"\n\n[tool.pixi.tasks]\nstart = \"python my_project/main.py\"\nlint = \"ruff lint\"\n\n[tool.pixi.system-requirements]\ncuda = \"11.0\"\n\n[tool.pixi.feature.test.dependencies]\npytest = \"*\"\n\n[tool.pixi.feature.test.tasks]\ntest = \"pytest\"\n\n[tool.pixi.environments]\ntest = [\"test\"]\n
"},{"location":"advanced/pyproject_toml/#build-system-section","title":"Build-system section","text":"The pyproject.toml
file normally contains a [build-system]
section. Pixi will use this section to build and install the project if it is added as a pypi path dependency.
If the pyproject.toml
file does not contain any [build-system]
section, pixi will fall back to uv's default, which is equivalent to the below:
[build-system]\nrequires = [\"setuptools >= 40.8.0\"]\nbuild-backend = \"setuptools.build_meta:__legacy__\"\n
Including a [build-system]
section is highly recommended. If you are not sure of the build-backend you want to use, including the [build-system]
section below in your pyproject.toml
is a good starting point. pixi init --format pyproject
defaults to hatchling
. The advantages of hatchling
over setuptools
are outlined on its website.
[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n
"},{"location":"advanced/updates_github_actions/","title":"Update lockfiles with GitHub Actions","text":"You can leverage GitHub Actions in combination with pavelzw/pixi-diff-to-markdown to automatically update your lockfiles similar to dependabot or renovate in other ecosystems.
Dependabot/Renovate support for pixi
You can track native Dependabot support for pixi in dependabot/dependabot-core #2227 and for Renovate in renovatebot/renovate #2213.
"},{"location":"advanced/updates_github_actions/#how-to-use","title":"How to use","text":"To get started, create a new GitHub Actions workflow file in your repository.
.github/workflows/update-lockfiles.ymlname: Update lockfiles\n\npermissions: # (1)!\n contents: write\n pull-requests: write\n\non:\n workflow_dispatch:\n schedule:\n - cron: 0 5 1 * * # (2)!\n\njobs:\n pixi-update:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - name: Set up pixi\n uses: prefix-dev/setup-pixi@v0.8.1\n with:\n run-install: false\n - name: Update lockfiles\n run: |\n set -o pipefail\n pixi update --json | pixi exec pixi-diff-to-markdown >> diff.md\n - name: Create pull request\n uses: peter-evans/create-pull-request@v6\n with:\n token: ${{ secrets.GITHUB_TOKEN }}\n commit-message: Update pixi lockfile\n title: Update pixi lockfile\n body-path: diff.md\n branch: update-pixi\n base: main\n labels: pixi\n delete-branch: true\n add-paths: pixi.lock\n
peter-evans/create-pull-request
In order for this workflow to work, you need to set \"Allow GitHub Actions to create and approve pull requests\" to true in your repository settings (in \"Actions\" -> \"General\").
Tip
If you don't have any pypi-dependencies
, you can use pixi update --json --no-install
to speed up diff generation.
"},{"location":"advanced/updates_github_actions/#triggering-ci-in-automated-prs","title":"Triggering CI in automated PRs","text":"
In order to prevent accidental recursive GitHub Workflow runs, GitHub decided to not trigger any workflows on automated PRs when using the default GITHUB_TOKEN
. There are a couple of ways how to work around this limitation. You can find excellent documentation for this in peter-evans/create-pull-request
, see here.
You can customize the summary by either using command-line-arguments of pixi-diff-to-markdown
or by specifying the configuration in pixi.toml
under [tool.pixi-diff-to-markdown]
. See the pixi-diff-to-markdown documentation or run pixi-diff-to-markdown --help
for more information.
If you want to use the same workflow in multiple repositories in your GitHub organization, you can create a reusable workflow. You can find more information in the GitHub documentation.
"},{"location":"design_proposals/pixi_global_manifest/","title":"Pixi Global Manifest","text":"Feedback wanted
This document is work in progress, and community feedback is greatly appreciated. Please share your thoughts at our GitHub discussion.
"},{"location":"design_proposals/pixi_global_manifest/#motivation","title":"Motivation","text":"pixi global
is currently limited to imperatively managing CLI packages. The next iteration of this feature should fulfill the following needs:
There are a few things we wanted to keep in mind in the design:
The global environments and exposed executables will be managed by a human-readable manifest. This manifest will stick to conventions set by pixi.toml
where possible. Among other things it will be written in the TOML format, be named pixi-global.toml
and be placed at ~/.pixi/manifests/pixi-global.toml
. The motivation for this location is discussed further below.
# The name of the environment is `python`\n[envs.python]\nchannels = [\"conda-forge\"]\n# optional, defaults to your current OS\nplatform = \"osx-64\"\n# It will expose python, python3 and python3.11, but not pip\n[envs.python.dependencies]\npython = \"3.11.*\"\npip = \"*\"\n\n[envs.python.exposed]\npython = \"python\"\npython3 = \"python3\"\n\"python3.11\" = \"python3.11\"\n\n# The name of the environment is `python3-10`\n[envs.python3-10]\nchannels = [\"https://fast.prefix.dev/conda-forge\"]\n# It will expose python3.10\n[envs.python3-10.dependencies]\npython = \"3.10.*\"\n\n[envs.python3-10.exposed]\n\"python3.10\" = \"python\"\n
"},{"location":"design_proposals/pixi_global_manifest/#cli","title":"CLI","text":"Install one or more packages PACKAGE
and expose their executables. If --environment
has been given, all packages will be installed in the same environment. If the environment already exists, the command will return with an error. --expose
can be given if --environment
is given as well or if only a single PACKAGE
will be installed. The syntax for MAPPING
is exposed_name=executable_name
, so for example python3.10=python
. --platform
sets the platform of the environment to PLATFORM
Multiple channels can be specified by using --channel
multiple times. By default, if no channel is provided, the default-channels
key in the pixi configuration is used, which again defaults to \"conda-forge\".
pixi global install [--expose MAPPING] [--environment ENV] [--platform PLATFORM] [--channel CHANNEL]... PACKAGE...\n
Remove environments ENV
.
pixi global uninstall <ENV>...\n
Update PACKAGE
if --package
is given. If not, all packages in environments ENV
will be updated. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines the update process will stop. If the update leads to executables being added, it will offer for each binary individually to expose it. --assume-yes
will assume yes as answer for every question that would otherwise be asked interactively.
pixi global update [--package PACKAGE] [--assume-yes] <ENV>...\n
Updates all packages in all environments. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines the update process will stop. If the update leads to executables being added, it will offer for each binary individually to expose it. --assume-yes
will assume yes as answer for every question that would otherwise be asked interactively.
pixi global update-all [--assume-yes]\n
Add one or more packages PACKAGE
into an existing environment ENV
. If environment ENV
does not exist, it will return with an error. Without --expose
no binary will be exposed. If you don't mention a spec like python=3.8.*
, the spec will be unconstrained with *
. The syntax for MAPPING
is exposed_name=executable_name
, so for example python3.10=python
.
pixi global add --environment ENV [--expose MAPPING] <PACKAGE>...\n
Remove package PACKAGE
from environment ENV
. If that was the last package remove the whole environment and print that information in the console. If this leads to executables being removed, it will offer to remove the mappings. If the user declines the remove process will stop.
pixi global remove --environment ENV PACKAGE\n
Add one or more MAPPING
for environment ENV
which describe which executables are exposed. The syntax for MAPPING
is exposed_name=executable_name
, so for example python3.10=python
.
pixi global expose add --environment ENV <MAPPING>...\n
Remove one or more exposed BINARY
from environment ENV
pixi global expose remove --environment ENV <BINARY>...\n
Ensure that the environments on the machine reflect the state in the manifest. The manifest is the single source of truth. Only if there's no manifest, will the data from existing environments be used to create a manifest. pixi global sync
is implied by most other pixi global
commands.
pixi global sync\n
List all environments, their specs and exposed executables
pixi global list\n
Set the channels CHANNEL
for a certain environment ENV
in the pixi global manifest.
pixi global channel set --environment ENV <CHANNEL>...\n
Set the platform PLATFORM
for a certain environment ENV
in the pixi global manifest.
pixi global platform set --environment ENV PLATFORM\n
"},{"location":"design_proposals/pixi_global_manifest/#simple-workflow","title":"Simple workflow","text":"Create environment python
, install package python=3.10.*
and expose all executables of that package
pixi global install python=3.10.*\n
Update all packages in environment python
pixi global update python\n
Remove environment python
pixi global uninstall python\n
Create environment python
and pip
, install corresponding packages and expose all executables of that packages
pixi global install python pip\n
Remove environments python
and pip
pixi global uninstall python pip\n
Create environment python-pip
, install python
and pip
in the same environment and expose all executables of these packages
pixi global install --environment python-pip python pip\n
"},{"location":"design_proposals/pixi_global_manifest/#adding-dependencies","title":"Adding dependencies","text":"Create environment python
, install package python
and expose all executables of that package. Then add package hypercorn
to environment python
but doesn't expose its executables.
pixi global install python\npixi global add --environment python hypercorn\n
Update package cryptography
(a dependency of hypercorn
) to 43.0.0
in environment python
pixi update --environment python cryptography=43.0.0\n
Then remove hypercorn
again.
pixi global remove --environment python hypercorn\n
"},{"location":"design_proposals/pixi_global_manifest/#specifying-which-executables-to-expose","title":"Specifying which executables to expose","text":"Make a new environment python3-10
with package python=3.10
and expose the python
executable as python3.10
.
pixi global install --environment python3-10 --expose \"python3.10=python\" python=3.10\n
Now python3.10
is available.
Run the following in order to expose python
from environment python3-10
as python3-10
instead.
pixi global expose remove --environment python3-10 python3.10\npixi global expose add --environment python3-10 \"python3-10=python\"\n
Now python3-10
is available, but python3.10
isn't anymore.
Most pixi global
sub commands imply a pixi global sync
.
install
/remove
/inject
/other global command
.First time, clean computer. Running the following creates manifest and ~/.pixi/envs/python
.
pixi global install python\n
Delete ~/.pixi
and syncing, should add environment python
again as described in the manifest
rm `~/.pixi/envs`\npixi global sync\n
If there's no manifest, but existing environments, pixi will create a manifest that matches your current environments. It is to be decided whether the user should be asked if they want an empty manifest instead, or if it should always import the data from the environments.
rm <manifest>\npixi global sync\n
If we remove the python environment from the manifest, running pixi global sync
will also remove the ~/.pixi/envs/python
environment from the file system.
vim <manifest>\npixi global sync\n
"},{"location":"design_proposals/pixi_global_manifest/#open-questions","title":"Open Questions","text":""},{"location":"design_proposals/pixi_global_manifest/#should-we-version-the-manifest","title":"Should we version the manifest?","text":"Something like:
[manifest]\nversion = 1\n
We still have to figure out which existing programs do something similar and how they benefit from it.
"},{"location":"design_proposals/pixi_global_manifest/#multiple-manifests","title":"Multiple manifests","text":"We could go for one default manifest, but also parse other manifests in the same directory. The only requirement to be parsed as manifest is a .toml
extension In order to modify those with the CLI
one would have to add an option --manifest
to select the correct one.
It is unclear whether the first implementation already needs to support this. At the very least we should put the manifest into its own folder like ~/.pixi/global/manifests/pixi-global.toml
In order to make it easier to manage manifests in version control, we could allow to set the manifest path via a key in the pixi configuration.
config.tomlglobal_manifests = \"/path/to/your/manifests\"\n
"},{"location":"examples/cpp-sdl/","title":"SDL example","text":" The cpp-sdl
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/cpp-sdl\n
Run the start
command
pixi run start\n
Using the depends-on
feature you only needed to run the start
task but under water it is running the following tasks.
# Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
"},{"location":"examples/opencv/","title":"Opencv example","text":"The opencv
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/opencv\n
"},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"Run the start
command to start the face detection algorithm.
pixi run start\n
The screen that starts should look like this:
Check out the webcame_capture.py
to see how we detect a face.
Next to face recognition, a camera calibration example is also included.
You'll need a checkerboard for this to work. Print this:
Then run
pixi run calibrate\n
To make a picture for calibration press SPACE
Do this approximately 10 times with the chessboard in view of the camera
After that press ESC
which will start the calibration.
When the calibration is done, the camera will be used again to find the distance to the checkerboard.
"},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"The nav2
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/ros2-nav2\n
Run the start
command
pixi run start\n
"},{"location":"features/advanced_tasks/","title":"Advanced tasks","text":"When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.
Here are some quick examples
pixi.toml[tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n \"cmake\",\n # Use the cross-platform Ninja generator\n \"-G\",\n \"Ninja\",\n # The source is in the root directory\n \"-S\",\n \".\",\n # We wanna build in the .build directory\n \"-B\",\n \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends-on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
"},{"location":"features/advanced_tasks/#depends-on","title":"Depends on","text":"Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.
An obvious example is compiling before running an application.
Check out our cpp_sdl
example to see it in action. In that package we have some tasks that depend on each other, so we can ensure that when you run pixi run start
everything is set up as expected.
pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n
Results in the following lines added to the pixi.toml
[tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends-on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends-on = [\"build\"] }\n
pixi run start\n
The tasks will be executed after each other:
configure
because it has no dependencies.build
as it only depends on configure
.start
as all its dependencies have run. If one of the commands fails (exits with a non-zero code) it will stop and the next one will not be started.
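As an illustrative sketch of this fail-fast behavior (task names and commands are hypothetical, not from the pixi examples), a two-step pipeline could look like this:

```toml
[tasks]
# If `check` exits with a non-zero code, `deploy` is never started.
check = "python -m pytest"
deploy = { cmd = "python deploy.py", depends-on = ["check"] }
```

Running `pixi run deploy` first runs `check` and stops the pipeline if the tests fail.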
With this logic you can also create aliases, as a task does not have to specify any command.
pixi task add fmt ruff\npixi task add lint pylint\n
pixi task alias style fmt lint\n
Results in the following pixi.toml
.
fmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends-on = [\"fmt\", \"lint\"] }\n
Now run both tools with one command.
pixi run style\n
"},{"location":"features/advanced_tasks/#working-directory","title":"Working directory","text":"Pixi tasks support the definition of a working directory.
cwd
\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml
file is located.
Consider a pixi project structured as follows:
\u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n \u2514\u2500\u2500 bar.py\n
To add a task to run the bar.py
file, use:
pixi task add bar \"python bar.py\" --cwd scripts\n
This will add the following line to the manifest file:
pixi.toml[tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n
"},{"location":"features/advanced_tasks/#caching","title":"Caching","text":"When you specify inputs
and/or outputs
to a task, pixi will reuse the result of the task.
For the cache, pixi checks that the following are true: the environment is unchanged since the last run, none of the input files have changed, and all output files exist.
If all of these conditions are met, pixi will not run the task again and instead use the existing result.
Inputs and outputs can be specified as globs, which will be expanded to all matching files.
pixi.toml[tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n
Note: if you want to debug the globs you can use the --verbose
flag to see which files are selected.
# shows info logs of all files that were selected by the globs\npixi run -v start\n
"},{"location":"features/advanced_tasks/#environment-variables","title":"Environment variables","text":"You can set environment variables for a task. These are seen as \"default\" values for the variables as you can overwrite them from the shell.
pixi.toml
[tasks]\necho = { cmd = \"echo $ARGUMENT\", env = { ARGUMENT = \"hello\" } }\n
If you run pixi run echo
it will output hello
. When you set the environment variable ARGUMENT
before running the task, it will use that value instead. ARGUMENT=world pixi run echo\n\u2728 Pixi task (echo in default): echo $ARGUMENT\nworld\n
These variables are not shared between tasks, so you need to define them for every task in which you want to use them.
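For example (the task names here are illustrative), the same variable has to be declared on each task that uses it:

```toml
[tasks]
# `ARGUMENT` is defined per task; `shout` does not inherit it from `greet`.
greet = { cmd = "echo $ARGUMENT", env = { ARGUMENT = "hello" } }
shout = { cmd = "echo $ARGUMENT!!!", env = { ARGUMENT = "hello" } }
```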
Extend instead of overwrite
If you reference the variable itself in its value, you extend it instead of overwriting it. For example, extending the PATH
pixi.toml
[tasks]\necho = { cmd = \"echo $PATH\", env = { PATH = \"/tmp/path:$PATH\" } }\n
This will output /tmp/path:/usr/bin:/bin
instead of the original /usr/bin:/bin
."},{"location":"features/advanced_tasks/#clean-environment","title":"Clean environment","text":"You can make sure the environment of a task is \"pixi only\". Here pixi will only include the minimal required environment variables for your platform to run the command in. The environment will contain all variables set by the conda environment like \"CONDA_PREFIX\"
. It will however include some default values from the shell, like: \"DISPLAY\"
, \"LC_ALL\"
, \"LC_TIME\"
, \"LC_NUMERIC\"
, \"LC_MEASUREMENT\"
, \"SHELL\"
, \"USER\"
, \"USERNAME\"
, \"LOGNAME\"
, \"HOME\"
, \"HOSTNAME\"
,\"TMPDIR\"
, \"XPC_SERVICE_NAME\"
, \"XPC_FLAGS\"
[tasks]\nclean_command = { cmd = \"python run_in_isolated_env.py\", clean-env = true}\n
This setting can also be set from the command line with pixi run --clean-env TASK_NAME
. clean-env
not supported on Windows
On Windows it's hard to create a \"clean environment\" as conda-forge
doesn't ship Windows compilers and Windows needs a lot of base variables. This makes the feature not worth implementing, as the sheer number of edge cases would render it unusable.
To support the different operating systems (Windows, macOS and Linux), pixi integrates a shell that can run on all of them. This is deno_task_shell
. The task shell is a limited implementation of a Bourne shell interface.
Besides running actual executables like ./myprogram
, cmake
or python
, the shell has some built-in commands.
cp
: Copies files.mv
: Moves files.rm
: Remove files or directories. Ex: rm -rf [FILE]...
- Commonly used to recursively delete files or directories.mkdir
: Makes directories. Ex. mkdir -p DIRECTORY...
- Commonly used to make a directory and all its parents with no error if it exists.pwd
: Prints the name of the current/working directory.sleep
: Delays for a specified amount of time. Ex. sleep 1
to sleep for 1 second, sleep 0.5
to sleep for half a second, or sleep 1m
to sleep a minuteecho
: Displays a line of text.cat
: Concatenates files and outputs them on stdout. When no arguments are provided, it reads and outputs stdin.exit
: Causes the shell to exit.unset
: Unsets environment variables.xargs
: Builds arguments from stdin and executes a command.&&
or ||
to separate two commands.&&
: if the command before &&
succeeds continue with the next command.||
: if the command before ||
fails continue with the next command.;
to run two commands without checking if the first command failed or succeeded.export ENV_VAR=value
$ENV_VAR
unset ENV_VAR
VAR=value
VAR=value && echo $VAR
|
: echo Hello | python receiving_app.py
|&
: use this to also get the stderr as input.$()
to use the output of a command as input for another command.python main.py $(git rev-parse HEAD)
!
before any command will negate the exit code from 1 to 0 or vice versa.>
to redirect the stdout to a file.echo hello > file.txt
will put hello
in file.txt
and overwrite existing text.python main.py 2> file.txt
will put the stderr
output in file.txt
.python main.py &> file.txt
will put the stderr
and stdout
in file.txt
.echo hello >> file.txt
will append hello
to the existing file.txt
.*
to expand all options.echo *.py
will echo all filenames that end with .py
echo **/*.py
will echo all filenames that end with .py
in this directory and all descendant directories.echo data[0-9].csv
will echo all filenames that have a single number after data
and before .csv
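Putting several of these features together (the task name and files below are hypothetical), one cross-platform task could chain built-ins, operators and globs:

```toml
[tasks]
# `&&` chains commands, `>` redirects stdout, and `*.toml` is glob-expanded,
# all handled by deno_task_shell the same way on every platform.
snapshot = "mkdir -p backup && cp *.toml backup/ && echo done > backup/status.txt"
```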
More info in deno_task_shell
documentation.
Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.
"},{"location":"features/environment/#structure","title":"Structure","text":"A pixi environment is located in the .pixi/envs
directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.
If you look at the .pixi/envs
directory, you will see a directory for each environment, the default
being the one that is normally used; if you specify a custom environment, the name you specified will be used.
.pixi\n\u2514\u2500\u2500 envs\n \u251c\u2500\u2500 cuda\n \u2502 \u251c\u2500\u2500 bin\n \u2502 \u251c\u2500\u2500 conda-meta\n \u2502 \u251c\u2500\u2500 etc\n \u2502 \u251c\u2500\u2500 include\n \u2502 \u251c\u2500\u2500 lib\n \u2502 ...\n \u2514\u2500\u2500 default\n \u251c\u2500\u2500 bin\n \u251c\u2500\u2500 conda-meta\n \u251c\u2500\u2500 etc\n \u251c\u2500\u2500 include\n \u251c\u2500\u2500 lib\n ...\n
These directories are conda environments, and you can use them as such, but you should not edit them manually; changes should always go through the pixi.toml
. Pixi will always make sure the environment is in sync with the pixi.lock
file. If this is not the case then all the commands that use the environment will automatically update the environment, e.g. pixi run
, pixi shell
.
If you want to clean up the environments, you can simply delete the .pixi/envs
directory, and pixi will recreate the environments when needed.
# either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n
"},{"location":"features/environment/#activation","title":"Activation","text":"An environment is nothing more than a set of files that are installed into a certain location, that somewhat mimics a global system install. You need to activate the environment to use it. In the most simple sense that mean adding the bin
directory of the environment to the PATH
variable. But there is more to it in a conda environment, as it also sets some environment variables.
To do the activation we have multiple options:
pixi shell
command to open a shell with the environment activated.pixi shell-hook
command to print the command to activate the environment in your current shell.pixi run
command to run a command in the environment.Where the run
command is special as it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.
Using the pixi shell-hook
in pixi you would get the following output:
export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n
It sets the PATH
and some more environment variables. But more importantly it also runs activation scripts that are presented by the installed packages. An example of this would be the libglib_activate.sh
script. Thus, just adding the bin
directory to the PATH
is not enough.
conda activate
-like activation","text":"If you prefer to use the traditional conda activate
-like activation, you could use the pixi shell-hook
command.
$ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n
Warning
It is not encouraged to use the traditional conda activate
-like activation, as deactivating the environment is not really possible. Use pixi shell
instead.
pixi
with direnv
","text":"Installing direnv Of course you can use pixi
to install direnv
globally. We recommend running
pixi global install direnv
to install the latest version of direnv
on your computer.
This allows you to use pixi
in combination with direnv
. Enter the following into your .envrc
file:
watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
pixi.lock
changes, direnv
invokes the shell-hook again.direnv
ensures that the environment is deactivated when you leave the directory.$ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
"},{"location":"features/environment/#environment-variables","title":"Environment variables","text":"The following environment variables are set by pixi, when using the pixi run
, pixi shell
, or pixi shell-hook
command:
PIXI_PROJECT_ROOT
: The root directory of the project.PIXI_PROJECT_NAME
: The name of the project.PIXI_PROJECT_MANIFEST
: The path to the manifest file (pixi.toml
).PIXI_PROJECT_VERSION
: The version of the project.PIXI_PROMPT
: The prompt to use in the shell, also used by pixi shell
itself.PIXI_ENVIRONMENT_NAME
: The name of the environment, defaults to default
.PIXI_ENVIRONMENT_PLATFORMS
: Comma separated list of platforms supported by the project.CONDA_PREFIX
: The path to the environment. (Used by multiple tools that already understand conda environments)CONDA_DEFAULT_ENV
: The name of the environment. (Used by multiple tools that already understand conda environments)PATH
: We prepend the bin
directory of the environment to the PATH
variable, so you can use the tools installed in the environment directly.INIT_CWD
: ONLY IN pixi run
: The directory where the command was run from.Note
Even though the variables are environment variables these cannot be overridden. E.g. you can not change the root of the project by setting PIXI_PROJECT_ROOT
in the environment.
When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock
file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml
and will put the output of the solve step into the pixi.lock
file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:
Pixi solves both the conda
and PyPI
dependencies, where the PyPI
dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. These solvers are split between the rattler
and rip
libraries, which control the heavy lifting of the solving process, executed by our custom SAT solver: resolvo
. resolvo
is able to solve multiple ecosystems like conda
and PyPI
. It implements the lazy solving process for PyPI
packages, which means that it only downloads the metadata of the packages that are needed to solve the environment. It also supports the conda
way of solving, which means that it downloads the metadata of all the packages at once and then solves in one go.
For the [pypi-dependencies]
, rip
implements sdist
building to retrieve the metadata of the packages, and wheel
building to install the packages. For this building step, pixi
requires you to first install python
in the (conda)[dependencies]
section of the pixi.toml
file. This will always be slower than the pure conda solves. So for the best pixi experience you should stay within the [dependencies]
section of the pixi.toml
file.
Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools.
Normally the location would be the following platform-specific default cache folder:
$XDG_CACHE_HOME/rattler
or $HOME/.cache/rattler
$HOME/Library/Caches/rattler
%LOCALAPPDATA%\\rattler
This location is configurable by setting the PIXI_CACHE_DIR
or RATTLER_CACHE_DIR
environment variable.
When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.
The cache contains multiple folders concerning different caches from within pixi.
pkgs
: Contains the downloaded/unpacked conda
packages.repodata
: Contains the conda
repodata cache.uv-cache
: Contains the uv
cache. This includes multiple caches, e.g. built-wheels
wheels
archives
http-cache
: Contains the conda-pypi
mapping cache.pixi.lock
lock file","text":"A lock file is the protector of the environments, and pixi is the key to unlock it.
"},{"location":"features/lockfile/#what-is-a-lock-file","title":"What is a lock file?","text":"A lock file locks the environment in a specific state. Within pixi a lock file is a description of the packages in an environment. The lock file contains two definitions:
The environments that are used in the project with their complete set of packages. e.g.:
environments:\n default:\n channels:\n - url: https://conda.anaconda.org/conda-forge/\n packages:\n linux-64:\n ...\n - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.12.2-hab00c5b_0_cpython.conda\n ...\n osx-64:\n ...\n - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.12.2-h9f0c242_0_cpython.conda\n ...\n
The definition of the packages themselves. e.g.:
- kind: conda\n name: python\n version: 3.12.2\n build: h9f0c242_0_cpython\n subdir: osx-64\n url: https://conda.anaconda.org/conda-forge/osx-64/python-3.12.2-h9f0c242_0_cpython.conda\n sha256: 7647ac06c3798a182a4bcb1ff58864f1ef81eb3acea6971295304c23e43252fb\n md5: 0179b8007ba008cf5bec11f3b3853902\n depends:\n - bzip2 >=1.0.8,<2.0a0\n - libexpat >=2.5.0,<3.0a0\n - libffi >=3.4,<4.0a0\n - libsqlite >=3.45.1,<4.0a0\n - libzlib >=1.2.13,<1.3.0a0\n - ncurses >=6.4,<7.0a0\n - openssl >=3.2.1,<4.0a0\n - readline >=8.2,<9.0a0\n - tk >=8.6.13,<8.7.0a0\n - tzdata\n - xz >=5.2.6,<6.0a0\n constrains:\n - python_abi 3.12.* *_cp312\n license: Python-2.0\n size: 14596811\n timestamp: 1708118065292\n
Pixi uses the lock file for the following reasons:
This gives you (and your collaborators) a way to really reproduce the environment they are working in. Using tools such as docker suddenly becomes much less necessary.
"},{"location":"features/lockfile/#when-is-a-lock-file-generated","title":"When is a lock file generated?","text":"A lock file is generated when you install a package. More specifically, a lock file is generated from the solve step of the installation process. The solve will return a list of packages that are to be installed, and the lock file will be generated from this list. This diagram tries to explain the process:
graph TD\n A[Install] --> B[Solve]\n B --> C[Generate and write lock file]\n C --> D[Install Packages]
"},{"location":"features/lockfile/#how-to-use-a-lock-file","title":"How to use a lock file","text":"Do not edit the lock file
A lock file is a machine only file, and should not be edited by hand.
That said, the pixi.lock
is human-readable, so it's easy to track the changes in the environment. We recommend you track the lock file in git
or other version control systems. This will ensure that the environment is always reproducible and that you can always revert back to a working state, in case something goes wrong. The pixi.lock
and the manifest file pixi.toml
/pyproject.toml
should always be in sync.
Running the following commands will check and automatically update the lock file if you changed any dependencies:
pixi install
pixi run
pixi shell
pixi shell-hook
pixi tree
pixi list
pixi add
pixi remove
All the commands that support the interaction with the lock file also include some lock file usage options:
--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file[^1]. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.Syncing the lock file with the manifest file
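Putting the options above together (assuming a project with a manifest file in the current directory):

```shell
# Install exactly what pixi.lock describes, without updating it
pixi install --frozen

# Only install if pixi.lock is up-to-date with the manifest file
pixi install --locked

# The same behavior through the environment variables
PIXI_FROZEN=true pixi run test
PIXI_LOCKED=true pixi shell
```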
The lock file is always matched with the whole configuration in the manifest file. This means that if you change the manifest file, the lock file will be updated.
flowchart TD\n C[manifest] --> A[lockfile] --> B[environment]
"},{"location":"features/lockfile/#lockfile-satisfiability","title":"Lockfile satisfiability","text":"The lock file is a description of the environment, and it should always be satisfiable. Satisfiable means that the given manifest file and the created environment are in sync with the lockfile. If the lock file is not satisfiable, pixi will generate a new lock file automatically.
Steps to check if the lock file is satisfiable:
environments
in the manifest file are in the lock filechannels
in the manifest file are in the lock filepackages
in the manifest file are in the lock file, and the versions in the lock file are compatible with the requirements in the manifest file, for both conda
and pypi
packages.matchspec
which can match on all the information we store in the lockfile, even timestamp
, subdir
and license
.pypi-dependencies
are added, all conda
package that are python packages in the lock file have a purls
field.pypi
editable packages are correct.If you want to get more details checkout the actual code as this is a simplification of the actual code.
"},{"location":"features/lockfile/#the-version-of-the-lock-file","title":"The version of the lock file","text":"The lock file has a version number, this is to ensure that the lock file is compatible with the local version of pixi
.
version: 4\n
Pixi is backward compatible with the lock file, but not forward compatible. This means that you can use an older lock file with a newer version of pixi
, but not the other way around.
The lock file can grow quite large, especially if you have a lot of packages installed. This is because the lock file contains all the information about the packages.
If you can not think of a case where you would benefit from a fast reproducible environment, then you don't need a lock file.
But take note of the following:
If you want to remove the lock file, you can simply delete it.
rm pixi.lock\n
This will remove the lock file, and the next time you run a command that requires the lock file, it will be generated again.
Note
This does remove the locked state of the environment, and the environment will be updated to the latest version of the packages.
"},{"location":"features/multi_environment/","title":"Multi Environment Support","text":""},{"location":"features/multi_environment/#motivating-example","title":"Motivating Example","text":"There are multiple scenarios where multiple environments are useful.
py39
and py310
or polars 0.12
and 0.13
.lint
or docs
.dev
.prod
and test-prod
where test-prod
is a strict superset of prod
.cuda
environment and a cpu
environment.This prepares pixi
for use in large projects with multiple use-cases, multiple developers and different CI needs.
There are a few things we wanted to keep in mind in the design:
Introduce environment sets into the pixi.toml
this describes environments based on feature
's. Introduce features into the pixi.toml
that can describe parts of environments. As an environment goes beyond just dependencies
the features
should be described including the following fields:
dependencies
: The conda package dependenciespypi-dependencies
: The pypi package dependenciessystem-requirements
: The system requirements of the environmentactivation
: The activation information for the environmentplatforms
: The platforms the environment can be run on.channels
: The channels used to create the environment. Adding the priority
field to the channels to allow concatenation of channels instead of overwriting.target
: All the above features but also separated by targets.tasks
: Feature specific tasks, tasks in one environment are selected as default tasks for the environment.[dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
Different dependencies per feature[feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
Full set of environment modification in one feature[feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwrite, so the default channels are still used.\n# Using the priority the concatenation is controlled, default is 0, the default channels are used last.\n# Highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}] # Results in: [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Define tasks as defaults of an environment[feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environment test test`\n
The environment definition should contain the following fields:
features: Vec<Feature>
: The features that are included in the environment set, which is also the default field in the environments.solve-group: String
: The solve group is used to group environments together at the solve stage. This is useful for environments that need to have the same dependencies but might extend them with additional dependencies. For instance when testing a production environment with additional test dependencies.[environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
Testing a production environment with additional dependencies[environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
Creating environments without including the default feature[dependencies]\npython = \"*\"\nnumpy = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom environment which only has the `lint` feature (numpy isn't part of that env).\nlint = {features = [\"lint\"], no-default-feature = true}\n
"},{"location":"features/multi_environment/#lock-file-structure","title":"lock file Structure","text":"Within the pixi.lock
file, a package may now include an additional environments
field, specifying the environments to which it belongs. To avoid duplication, a package's environments
field may contain multiple environments, keeping the lock file minimal in size.
- platform: linux-64\n name: pre-commit\n version: 3.3.3\n category: main\n environments:\n - dev\n - test\n - lint\n ...:\n- platform: linux-64\n name: python\n version: 3.9.3\n category: main\n environments:\n - dev\n - test\n - lint\n - py39\n - default\n ...:\n
"},{"location":"features/multi_environment/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"Users can manually activate the desired environment via command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user the cli would look like this:
Default behavior\u279c pixi run python\n# Runs python in the `default` environment\n
Activating an specific environment\u279c pixi run -e test pytest\n\u279c pixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n
Activating a shell in an environment\u279c pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
Running any command in an environment\u279c pixi run -e test any_command\n# Runs any_command in the `test` environment which doesn't require to be predefined as a task.\n
"},{"location":"features/multi_environment/#ambiguous-environment-selection","title":"Ambiguous Environment Selection","text":"It's possible to define tasks in multiple environments, in this case the user should be prompted to select the environment.
Here is a simple example of a task only manifest:
pixi.toml
[project]\nname = \"test_ambiguous_env\"\nchannels = []\nplatforms = [\"linux-64\", \"win-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ndefault = \"echo Default\"\nambi = \"echo Ambi::Default\"\n[feature.test.tasks]\ntest = \"echo Test\"\nambi = \"echo Ambi::Test\"\n\n[feature.dev.tasks]\ndev = \"echo Dev\"\nambi = \"echo Ambi::Dev\"\n\n[environments]\ndefault = [\"test\", \"dev\"]\ntest = [\"test\"]\ndev = [\"dev\"]\n
Trying to run the abmi
task will prompt the user to select the environment. As it is available in all environments. Interactive selection of environments if task is in multiple environments\u279c pixi run ambi\n? The task 'ambi' can be run in multiple environments.\n\nPlease select an environment to run the task in: \u203a\n\u276f default # selecting default\n test\n dev\n\n\u2728 Pixi task (ambi in default): echo Ambi::Test\nAmbi::Test\n
As you can see it runs the task defined in the feature.task
but it is run in the default
environment. This happens because the ambi
task is defined in the test
feature, and it is overwritten in the default environment. So the tasks.default
is now unreachable from any environment.
Some other results running in this example:
\u279c pixi run --environment test ambi\n\u2728 Pixi task (ambi in test): echo Ambi::Test\nAmbi::Test\n\n\u279c pixi run --environment dev ambi\n\u2728 Pixi task (ambi in dev): echo Ambi::Dev\nAmbi::Dev\n\n# dev is run in the default environment\n\u279c pixi run dev\n\u2728 Pixi task (dev in default): echo Dev\nDev\n\n# dev is run in the dev environment\n\u279c pixi run -e dev dev\n\u2728 Pixi task (dev in dev): echo Dev\nDev\n
"},{"location":"features/multi_environment/#important-links","title":"Important links","text":"In polarify
they want to test multiple Python versions combined with multiple versions of polars. This is currently done by using a matrix in GitHub Actions. This can be replaced by using multiple environments.
[project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
.github/workflows/test.ymljobs:\n tests-per-env:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: ${{ matrix.environment }}\n - name: Run tasks\n run: |\n pixi run --environment ${{ matrix.environment }} test\n tests-with-multiple-envs:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: pl017 pl018\n - run: |\n pixi run -e pl017 test\n pixi run -e pl018 test\n
Test vs Production example This is an example of a project that has a test
feature and prod
environment. The prod
environment is a production environment that contains the run dependencies. The test
feature is a set of dependencies and tasks that we want to put on top of the previously solved prod
environment. This is a common use case where we want to test the production environment with additional dependencies.
pixi.toml
[project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In CI, you would run the following commands: pixi run postinstall-e && pixi run test\n
Locally you would run the following command: pixi run postinstall-e && pixi run dev\n
Then in a Dockerfile you would run the following command: Dockerfile
FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n
Multiple machines from one project This is an example for an ML project that should be executable on a machine that supports cuda
and mlx
. It should also be executable on machines that don't support cuda
or mlx
, for which we use the cpu
feature.
[project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n# MLX is only available on macOS >=13.5 (>14.0 is recommended)\nsystem-requirements = {macos = \"13.5\"}\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.16.0,<0.17.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n
Running the project on a cuda machinepixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
Running the project with mlxpixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
Running the project on a machine without cuda or mlxpixi run train-model\n
"},{"location":"features/multi_platform_configuration/","title":"Multi platform config","text":"Pixi's vision includes being supported on all major platforms. Sometimes that needs some extra configuration to work well. On this page, you will learn what you can configure to align better with the platform you are making your application for.
Here is an example manifest file that highlights some of the features:
pixi.toml
pyproject.toml
pixi.toml[project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
pyproject.toml[tool.pixi.project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tool.pixi.dependencies]\npython = \">=3.8\"\n\n[tool.pixi.target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"~=3.7.0\"\n\n\n[tool.pixi.activation]\nscripts = [\"setup.sh\"]\n\n[tool.pixi.target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
"},{"location":"features/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"The project.platforms
defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lock file.
Running pixi install
on a platform that is not configured will warn the user that it is not set up for that platform:
\u276f pixi install\n \u00d7 the project is not configured for your current platform\n \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n \u00b7 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u00b7 \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n \u2570\u2500\u2500\u2500\u2500\n help: The project needs to be configured to support your platform (linux-64).\n
"},{"location":"features/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"With the target specifier, you can overwrite the original configuration specifically for a single platform. If you are targeting a specific platform in your target specifier that was not specified in your project.platforms
then pixi will throw an error.
It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.
pixi.toml[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n
In the above example, we specify that we depend on msmpi
only on Windows. We also specifically want python
on 3.8
when installing on Windows. This will overwrite the dependencies from the generic set of dependencies. This will not touch any of the other platforms.
You can use pixi's cli to add these dependencies to the manifest file.
pixi add --platform win-64 posix\n
This also works for the host
and build
dependencies.
pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n
Which results in this.
pixi.toml[target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
"},{"location":"features/multi_platform_configuration/#activation","title":"Activation","text":"Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your projects. Generated activation scripts are often in this category, default scripts in unix are bash
and for windows they are bat
To deal with this, you can define your activation scripts using the target definition.
pixi.toml
[activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
When this project is run on win-64
it will only execute the target scripts, not the scripts specified in the default activation.scripts
"},{"location":"features/system_requirements/","title":"System Requirements in pixi","text":"System requirements define the minimal system specifications necessary during dependency resolution for a project. For instance, specifying a Unix system with a particular minimal libc
version ensures that dependencies are compatible with the project's environment.
System specifications are closely related to virtual packages, allowing for flexible and accurate dependency management.
"},{"location":"features/system_requirements/#default-system-requirements","title":"Default System Requirements","text":"The following configurations outline the default minimal system requirements for different operating systems:
LinuxWindowsosx-64osx-arm64# Default system requirements for Linux\n[system-requirements]\nlinux = \"4.18\"\nlibc = { family = \"glibc\", version = \"2.28\" }\n
Windows currently has no minimal system requirements defined. If your project requires specific Windows configurations, you should define them accordingly.
# Default system requirements for macOS\n[system-requirements]\nmacos = \"13.0\"\n
# Default system requirements for macOS ARM64\n[system-requirements]\nmacos = \"13.0\"\n
"},{"location":"features/system_requirements/#customizing-system-requirements","title":"Customizing System Requirements","text":"You only need to define system requirements if your project necessitates a different set from the defaults. This is common when installing environments on older or newer versions of operating systems.
"},{"location":"features/system_requirements/#adjusting-for-older-systems","title":"Adjusting for Older Systems","text":"If you're encountering an error like:
\u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '4.18' but the system has version '4.12.14'\n
This indicates that the project's system requirements are higher than your current system's specifications. To resolve this, you can lower the system requirements in your project's configuration:
[system-requirements]\nlinux = \"4.12.14\"\n
This adjustment informs the dependency resolver to accommodate the older system version.
"},{"location":"features/system_requirements/#using-cuda-in-pixi","title":"Using CUDA in pixi","text":"To utilize CUDA in your project, you must specify the desired CUDA version in the system-requirements table. This ensures that CUDA is recognized and appropriately locked into the lock file if necessary.
Example Configuration
[system-requirements]\ncuda = \"12\" # Replace \"12\" with the specific CUDA version you intend to use\n
"},{"location":"features/system_requirements/#setting-system-requirements-environment-specific","title":"Setting System Requirements environment specific","text":"This can be set per feature
in the manifest
file.
[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[environments]\ncuda = [\"cuda\"]\n
"},{"location":"features/system_requirements/#available-override-options","title":"Available Override Options","text":"In certain scenarios, you might need to override the system requirements detected on your machine. This can be particularly useful when working on systems that do not meet the project's default requirements.
You can override virtual packages by setting the following environment variables:
CONDA_OVERRIDE_CUDA
- Description: Sets the CUDA version. - Usage Example: CONDA_OVERRIDE_CUDA=11
CONDA_OVERRIDE_GLIBC
- Description: Sets the glibc version. - Usage Example: CONDA_OVERRIDE_GLIBC=2.28
CONDA_OVERRIDE_OSX
- Description: Sets the macOS version. - Usage Example: CONDA_OVERRIDE_OSX=13.0
For more detailed information on managing virtual packages
and overriding system requirements, refer to the Conda Documentation.
Using JupyterLab with pixi is very simple. You can just create a new pixi project and add the jupyterlab
package to it. The full example is provided at the following GitHub link.
pixi init\npixi add jupyterlab\n
This will create a new pixi project and add the jupyterlab
package to it. You can then start JupyterLab using the following command:
pixi run jupyter lab\n
If you want to add more \"kernels\" to JupyterLab, you can simply add them to your current project \u2013 as well as any dependencies from the scientific stack you might need.
pixi add bash_kernel ipywidgets matplotlib numpy pandas # ...\n
"},{"location":"ide_integration/jupyterlab/#what-kernels-are-available","title":"What kernels are available?","text":"You can easily install more \"kernels\" for JupyterLab. The conda-forge
repository has a number of interesting additional kernels - not just Python!
bash_kernel
A kernel for bashxeus-cpp
A C++ kernel based on the new clang-replxeus-cling
A C++ kernel based on the slightly older Clingxeus-lua
A Lua kernelxeus-sql
A kernel for SQLr-irkernel
An R kernelIf you want to have only one instance of JupyterLab running but still want per-directory Pixi environments, you can use one of the kernels provided by the pixi-kernel
package.
To get started, create a Pixi project, add jupyterlab
and pixi-kernel
and then start JupyterLab:
pixi init\npixi add jupyterlab pixi-kernel\npixi run jupyter lab\n
This will start JupyterLab and open it in your browser.
pixi-kernel
searches for a manifest file, either pixi.toml
or pyproject.toml
, in the same directory of your notebook or in any parent directory. When it finds one, it will use the environment specified in the manifest file to start the kernel and run your notebooks.
If you just want to check a JupyterLab environment running in the cloud using pixi-kernel
, you can visit Binder.
You can use PyCharm with pixi environments by using the conda
shim provided by the pixi-pycharm package.
To get started, add pixi-pycharm
to your pixi project.
pixi add pixi-pycharm\n
This will ensure that the conda shim is installed in your project's environment.
Having pixi-pycharm
installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda
file (on Windows: conda.bat
) which is located in .pixi/envs/default/libexec
. You can get the path using the following command:
pixi run 'echo $CONDA_PREFIX/libexec/conda'\n
pixi run 'echo $CONDA_PREFIX\\\\libexec\\\\conda.bat'\n
This is an executable that tricks PyCharm into thinking it's the proper conda
executable. Under the hood it redirects all calls to the corresponding pixi
equivalent.
Use the conda shim from this pixi project
Please make sure that this is the conda
shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.
Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.
PyCharm should now be able to show you the installed packages as well.
You can now run your programs and tests as usual.
Mark .pixi
as excluded
In order for PyCharm to not get confused about the .pixi
directory, please mark it as excluded.
Also, when using a remote interpreter, you should exclude the .pixi
directory on the remote machine. Instead, you should run pixi install
on the remote machine and select the conda shim from there.
If your project uses multiple environments to tests different Python versions or dependencies, you can add multiple environments to PyCharm by specifying Use existing environment in the Add Python Interpreter dialog.
You can then specify the corresponding environment in the bottom right corner of the PyCharm window.
"},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"
When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It may also happen that you have multiple environments with the same name.
It is recommended to rename the environments to something unique.
"},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"Logs are written to ~/.cache/pixi-pycharm.log
. You can use them to debug problems. Please attach the logs when filing a bug report.
You can use pixi
to manage your R dependencies. The conda-forge channel contains a wide range of R packages that can be installed using pixi
.
R packages are usually prefixed with r-
in the conda-forge channel. To install an R package, you can use the following command:
pixi add r-<package-name>\n# for example\npixi add r-ggplot2\n
"},{"location":"ide_integration/r_studio/#using-r-packages-in-rstudio","title":"Using R packages in RStudio","text":"To use the R packages installed by pixi
in RStudio, you need to run rstudio
from an activated environment. This can be achieved by running RStudio from pixi shell
or from a task in the pixi.toml
file.
The full example can be found here: RStudio example. Here is an example of a pixi.toml
file that sets up an RStudio task:
[project]\nname = \"r\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[target.linux.tasks]\nrstudio = \"rstudio\"\n\n[target.osx.tasks]\nrstudio = \"open -a rstudio\"\n# or alternatively with the full path:\n# rstudio = \"/Applications/RStudio.app/Contents/MacOS/RStudio\"\n\n[dependencies]\nr = \">=4.3,<5\"\nr-ggplot2 = \">=3.5.0,<3.6\"\n
Once RStudio has loaded, you can execute the following R code that uses the ggplot2
package:
# Load the ggplot2 package\nlibrary(ggplot2)\n\n# Load the built-in 'mtcars' dataset\ndata <- mtcars\n\n# Create a scatterplot of 'mpg' vs 'wt'\nggplot(data, aes(x = wt, y = mpg)) +\n geom_point() +\n labs(x = \"Weight (1000 lbs)\", y = \"Miles per Gallon\") +\n ggtitle(\"Fuel Efficiency vs. Weight\")\n
Note
This example assumes that you have installed RStudio system-wide. We are working on updating RStudio as well as the R interpreter builds on Windows for maximum compatibility with pixi
.
--verbose (-v|vv|vvv)
Increase the verbosity of the output messages, the -v|vv|vvv increases the level of verbosity respectively.--help (-h)
Shows help information, use -h
to get the short version of the help.--version (-V)
: shows the version of pixi that is used.--quiet (-q)
: Decreases the amount of output.--color <COLOR>
: Whether the log needs to be colored [env: PIXI_COLOR=
] [default: auto
] [possible values: always
, never
, auto
]. Pixi also honors the FORCE_COLOR
and NO_COLOR
environment variables. They both take precedence over --color
and PIXI_COLOR
.--no-progress
: Disables the progress bar.[env: PIXI_NO_PROGRESS
] [default: false
]init
","text":"This command is used to create a new project. It initializes a pixi.toml
file and also prepares a .gitignore
to prevent the environment from being added to git
.
It also supports the pyproject.toml
file, if you have a pyproject.toml
file in the directory where you run pixi init
, it appends the pixi data to the pyproject.toml
instead of a new pixi.toml
file.
[PATH]
: Where to place the project (defaults to current path) [default: .
]--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that the project supports. (Allowed to be used more than once)--import <ENV_FILE> (-i)
: Import an existing conda environment file, e.g. environment.yml
.--format <FORMAT>
: Specify the format of the project file, either pyproject
or pixi
. [default: pixi
]Importing an environment.yml
When importing an environment, the pixi.toml
will be created with the dependencies from the environment file. The pixi.lock
will be created when you install the environment. We don't support git+
urls as dependencies for pip packages and for the defaults
channel we use main
, r
and msys2
as the default channels.
pixi init myproject\npixi init ~/myproject\npixi init # Initializes directly in the current directory.\npixi init --channel conda-forge --channel bioconda myproject\npixi init --platform osx-64 --platform linux-64 myproject\npixi init --import environment.yml\npixi init --format pyproject\npixi init --format pixi\n
"},{"location":"reference/cli/#add","title":"add
","text":"Adds dependencies to the manifest file. It will only add if the package with its version constraint is able to work with rest of the dependencies in the project. More info on multi-platform configuration.
If the project manifest is a pyproject.toml
, adding a pypi dependency will add it to the native pyproject project.dependencies
array, or to the native project.optional-dependencies
table if a feature is specified:
pixi add --pypi boto3
would add boto3
to the project.dependencies
arraypixi add --pypi boto3 --feature aws
would add boto3
to the project.dependencies.aws
arrayThese dependencies will be read by pixi as if they had been added to the pixi pypi-dependencies
tables of the default or a named feature.
[SPECS]
: The package(s) to add, space separated. The version constraint is optional.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package. Parses dependencies as PEP508 requirements, supporting extras and versions. See configuration for details.--no-install
: Don't install the package to the environment, only add the package to the lock-file.--no-lockfile-update
: Don't update the lock-file, implies the --no-install
flag.--platform <PLATFORM> (-p)
: The platform for which the dependency should be added. (Allowed to be used more than once)--feature <FEATURE> (-f)
: The feature for which the dependency should be added.--editable
: Specifies an editable dependency, only use in combination with --pypi
.pixi add numpy # (1)!\npixi add numpy pandas \"pytorch>=1.8\" # (2)!\npixi add \"numpy>=1.22,<1.24\" # (3)!\npixi add --manifest-path ~/myproject/pixi.toml numpy # (4)!\npixi add --host \"python>=3.9.0\" # (5)!\npixi add --build cmake # (6)!\npixi add --platform osx-64 clang # (7)!\npixi add --no-install numpy # (8)!\npixi add --no-lockfile-update numpy # (9)!\npixi add --feature featurex numpy # (10)!\n\n# Add a pypi dependency\npixi add --pypi requests[security] # (11)!\npixi add --pypi Django==5.1rc1 # (12)!\npixi add --pypi \"boltons>=24.0.0\" --feature lint # (13)!\npixi add --pypi \"boltons @ https://files.pythonhosted.org/packages/46/35/e50d4a115f93e2a3fbf52438435bb2efcf14c11d4fcd6bdcd77a6fc399c9/boltons-24.0.0-py3-none-any.whl\" # (14)!\npixi add --pypi \"exchangelib @ git+https://github.com/ecederstrand/exchangelib\" # (15)!\npixi add --pypi \"project @ file:///absolute/path/to/project\" # (16)!\npixi add --pypi \"project@file:///absolute/path/to/project\" --editable # (17)!\n
numpy
package to the project with the latest available for the solved environment.numpy
package with the version constraint.numpy
package to the project of the manifest file at the given path.python
package as a host dependency. There is currently no different behavior for host dependencies.cmake
package as a build dependency. There is currently no different behavior for build dependencies.clang
package only for the osx-64
platform.numpy
package to the manifest and lockfile, without installing it in an environment.numpy
package to the manifest without updating the lockfile or installing it in the environment.numpy
package in the feature featurex
.requests
package as pypi
dependency with the security
extra.pre-release
version of Django
to the project as a pypi
dependency.boltons
package in the feature lint
as pypi
dependency.boltons
package with the given url
as pypi
dependency.exchangelib
package with the given git
url as pypi
dependency.project
package with the given file
url as pypi
dependency.project
package with the given file
url as an editable
package as pypi
dependency.Tip
If you want to use a non default pinning strategy, you can set it using pixi's configuration.
pixi config set pinning-strategy no-pin --global\n
The default is semver
which will pin the dependencies to the latest major version or minor for v0
versions."},{"location":"reference/cli/#install","title":"install
","text":"Installs an environment based on the manifest file. If there is no pixi.lock
file or it is not up-to-date with the manifest file, it will (re-)generate the lock file.
pixi install
only installs one environment at a time, if you have multiple environments you can select the right one with the --environment
flag. If you don't provide an environment, the default
environment will be installed.
Running pixi install
is not required before running other commands. As all commands interacting with the environment will first run the install
command if the environment is not ready, to make sure you always run in a correct state. E.g. pixi run
, pixi shell
, pixi shell-hook
, pixi add
, pixi remove
to name a few.
--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to install, if none are provided the default environment will be used.pixi install\npixi install --manifest-path ~/myproject/pixi.toml\npixi install --frozen\npixi install --locked\npixi install --environment lint\npixi install -e lint\n
"},{"location":"reference/cli/#update","title":"update
","text":"The update
command checks if there are newer versions of the dependencies and updates the pixi.lock
file and environments accordingly. It will only update the lock file if the dependencies in the manifest file are still compatible with the new versions.
[PACKAGES]...
The packages to update, space separated. If no packages are provided, all packages will be updated.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--environment <ENVIRONMENT> (-e)
: The environment to install, if none are provided all the environments are updated.--platform <PLATFORM> (-p)
: The platform for which the dependencies should be updated.--dry-run (-n)
: Only show the changes that would be made, without actually updating the lock file or environment.--no-install
: Don't install the (solve) environment needed for solving pypi-dependencies.--json
: Output the changes in json format.pixi update numpy\npixi update numpy pandas\npixi update --manifest-path ~/myproject/pixi.toml numpy\npixi update --environment lint python\npixi update -e lint -e schema -e docs pre-commit\npixi update --platform osx-arm64 mlx\npixi update -p linux-64 -p osx-64 numpy\npixi update --dry-run\npixi update --no-install boto3\n
"},{"location":"reference/cli/#run","title":"run
","text":"The run
commands first checks if the environment is ready to use. When you didn't run pixi install
the run command will do that for you. The custom tasks defined in the manifest file are also available through the run command.
You cannot run pixi run source setup.bash
as source
is not available in the deno_task_shell
commandos and not an executable.
[TASK]...
The task you want to run in the projects environment, this can also be a normal command. And all arguments after the task will be passed to the task.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to run the task in, if none are provided the default environment will be used or a selector will be given to select the right environment.--clean-env
: Run the task in a clean environment, this will remove all environment variables of the shell environment except for the ones pixi sets. THIS DOESN't WORK ON Windows
. pixi run python\npixi run cowpy \"Hey pixi user\"\npixi run --manifest-path ~/myproject/pixi.toml python\npixi run --frozen python\npixi run --locked python\n# If you have specified a custom task in the pixi.toml you can run it with run as well\npixi run build\n# Extra arguments will be passed to the tasks command.\npixi run task argument1 argument2\n\n# If you have multiple environments you can select the right one with the --environment flag.\npixi run --environment cuda python\n\n# THIS DOESN'T WORK ON WINDOWS\n# If you want to run a command in a clean environment you can use the --clean-env flag.\n# The PATH should only contain the pixi environment here.\npixi run --clean-env \"echo \\$PATH\"\n
Info
In pixi
the deno_task_shell
is the underlying runner of the run command. Checkout their documentation for the syntax and available commands. This is done so that the run commands can be run across all platforms.
Cross environment tasks
If you're using the depends-on
feature of the tasks
, the tasks will be run in the order you specified them. The depends-on
can be used cross environment, e.g. you have this pixi.toml
:
[tasks]\nstart = { cmd = \"python start.py\", depends-on = [\"build\"] }\n\n[feature.build.tasks]\nbuild = \"cargo build\"\n[feature.build.dependencies]\nrust = \">=1.74\"\n\n[environments]\nbuild = [\"build\"]\n
Then you're able to run the build
from the build
environment and start
from the default environment, simply by calling:
pixi run start\n
"},{"location":"reference/cli/#exec","title":"exec
","text":"Runs a command in a temporary environment disconnected from any project. This can be useful to quickly test out a certain package or version.
Temporary environments are cached. If the same command is run again, the same environment will be reused.
Cleaning temporary environmentsCurrently, temporary environments can only be cleaned up manually. Environments for pixi exec
are stored under cached-envs-v0/
in the cache directory. Run pixi info
to find the cache directory.
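The manual cleanup described above can be scripted; a minimal sketch, assuming the cache lives at the default rattler location when PIXI_CACHE_DIR is unset (verify the actual path with pixi info first):

```shell
# Manual cleanup sketch for `pixi exec` environments.
# Assumption: the cache directory matches what `pixi info` reports;
# PIXI_CACHE_DIR and the fallback path below are illustrative defaults.
CACHE_DIR="${PIXI_CACHE_DIR:-$HOME/.cache/rattler/cache}"
rm -rf "$CACHE_DIR/cached-envs-v0"
echo "Removed: $CACHE_DIR/cached-envs-v0"
```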
<COMMAND>
: The command to run.--spec <SPECS> (-s)
: Matchspecs of packages to install. If this is not provided, the package is guessed from the command.--channel <CHANNELS> (-c)
: The channel to install the packages from. If not specified, the default channel is used.--force-reinstall
: If specified, a new environment is always created, even if one already exists.pixi exec python\n\n# Add a constraint to the python version\npixi exec -s python=3.9 python\n\n# Run ipython and include the py-rattler package in the environment\npixi exec -s ipython -s py-rattler ipython\n\n# Force reinstall to recreate the environment and get the latest package versions\npixi exec --force-reinstall -s ipython -s py-rattler ipython\n
"},{"location":"reference/cli/#remove","title":"remove
","text":"Removes dependencies from the manifest file.
If the project manifest is a pyproject.toml
, removing a pypi dependency with the --pypi
flag will remove it from either the native pyproject project.dependencies
array or the native project.optional-dependencies
table (if a feature is specified), or from the pixi pypi-dependencies
tables of the default or a named feature (if a feature is specified).
<DEPS>...
: List of dependencies you wish to remove from the project.--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package.--platform <PLATFORM> (-p)
: The platform from which the dependency should be removed.--feature <FEATURE> (-f)
: The feature from which the dependency should be removed.--no-install
: Don't install the environment, only remove the package from the lock-file and manifest.--no-lockfile-update
: Don't update the lock-file; implies the --no-install
flag.pixi remove numpy\npixi remove numpy pandas pytorch\npixi remove --manifest-path ~/myproject/pixi.toml numpy\npixi remove --host python\npixi remove --build cmake\npixi remove --pypi requests\npixi remove --platform osx-64 --build clang\npixi remove --feature featurex clang\npixi remove --feature featurex --platform osx-64 clang\npixi remove --feature featurex --platform osx-64 --build clang\npixi remove --no-install numpy\n
"},{"location":"reference/cli/#task","title":"task
","text":"If you want to make a shorthand for a specific command, you can add a task for it.
"},{"location":"reference/cli/#options_7","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.task add
","text":"Add a task to the manifest file; use --depends-on
to add tasks you want to run before this task, e.g. build before an execute task.
<NAME>
: The name of the task.<COMMAND>
: The command to run. This can be more than one word.Info
If you are using $
for env variables, they will be resolved before adding them to the task. If you want to use $
in the task you need to escape it with a \\
, e.g. echo \\$HOME
.
--platform <PLATFORM> (-p)
: the platform for which this task should be added.--feature <FEATURE> (-f)
: the feature for which the task is added; if none is provided, the task is added to the default tasks.--depends-on <DEPENDS_ON>
: the task it depends on, to be run before the one you're adding.--cwd <CWD>
: the working directory for the task relative to the root of the project.--env <ENV>
: the environment variables as key=value
pairs for the task, can be used multiple times, e.g. --env \"VAR1=VALUE1\" --env \"VAR2=VALUE2\"
.--description <DESCRIPTION>
: a description of the task.pixi task add cow cowpy \"Hello User\"\npixi task add tls ls --cwd tests\npixi task add test cargo t --depends-on build\npixi task add build-osx \"METAL=1 cargo build\" --platform osx-64\npixi task add train python train.py --feature cuda\npixi task add publish-pypi \"hatch publish --yes --repo main\" --feature build --env HATCH_CONFIG=config/hatch.toml --description \"Publish the package to pypi\"\n
This adds the following to the manifest file:
[tasks]\ncow = \"cowpy \\\"Hello User\\\"\"\ntls = { cmd = \"ls\", cwd = \"tests\" }\ntest = { cmd = \"cargo t\", depends-on = [\"build\"] }\n\n[target.osx-64.tasks]\nbuild-osx = \"METAL=1 cargo build\"\n\n[feature.cuda.tasks]\ntrain = \"python train.py\"\n\n[feature.build.tasks]\npublish-pypi = { cmd = \"hatch publish --yes --repo main\", env = { HATCH_CONFIG = \"config/hatch.toml\" }, description = \"Publish the package to pypi\" }\n
Which you can then run with the run
command:
pixi run cow\n# Extra arguments will be passed to the tasks command.\npixi run test --test test1\n
"},{"location":"reference/cli/#task-remove","title":"task remove
","text":"Remove the task from the manifest file.
"},{"location":"reference/cli/#arguments_7","title":"Arguments","text":"<NAMES>
: The names of the tasks, space separated.--platform <PLATFORM> (-p)
: the platform for which this task is removed.--feature <FEATURE> (-f)
: the feature for which the task is removed.pixi task remove cow\npixi task remove --platform linux-64 test\npixi task remove --feature cuda task\n
"},{"location":"reference/cli/#task-alias","title":"task alias
","text":"Create an alias for a task.
"},{"location":"reference/cli/#arguments_8","title":"Arguments","text":"<ALIAS>
: The alias name.<DEPENDS_ON>
: The names of the tasks you want to execute on this alias; order counts, the first one runs first.--platform <PLATFORM> (-p)
: the platform for which this alias is created.pixi task alias test-all test-py test-cpp test-rust\npixi task alias --platform linux-64 test test-linux\npixi task alias moo cow\n
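An alias is stored in the manifest as a task with no command of its own, only a depends-on list; a sketch of what the first example above would add to the manifest (the exact layout pixi writes may differ):

```toml
[tasks]
# Running `pixi run test-all` executes the three tasks in order
test-all = { depends-on = ["test-py", "test-cpp", "test-rust"] }
```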
"},{"location":"reference/cli/#task-list","title":"task list
","text":"List all tasks in the project.
"},{"location":"reference/cli/#options_11","title":"Options","text":"--environment
(-e
): the environment whose tasks to list; if none is provided, the default tasks will be listed.--summary
(-s
): list the tasks per environment.pixi task list\npixi task list --environment cuda\npixi task list --summary\n
"},{"location":"reference/cli/#list","title":"list
","text":"List the project's packages. Highlighted packages are explicit dependencies.
"},{"location":"reference/cli/#options_12","title":"Options","text":"--platform <PLATFORM> (-p)
: The platform to list packages for. Defaults to the current platform.--json
: Whether to output in JSON format.--json-pretty
: Whether to output in pretty JSON format.--sort-by <SORT_BY>
: Sorting strategy [default: name] [possible values: size, name, type]--explicit (-x)
: Only list the packages that are explicitly added to the manifest file.--manifest-path <MANIFEST_PATH>
: The path to the manifest file; by default it searches for one in the parent directories.--environment (-e)
: The environment's packages to list; if none is provided, the default environment's packages will be listed.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with the manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: Only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--no-install
: Don't install the environment for pypi solving, only update the lock-file if it can solve without installing. (Implied by --frozen
and --locked
)pixi list\npixi list --json-pretty\npixi list --explicit\npixi list --sort-by size\npixi list --platform win-64\npixi list --environment cuda\npixi list --frozen\npixi list --locked\npixi list --no-install\n
Output will look like this, where python
will be green as it is the package that was explicitly added to the manifest file:
\u279c pixi list\n Package Version Build Size Kind Source\n _libgcc_mutex 0.1 conda_forge 2.5 KiB conda _libgcc_mutex-0.1-conda_forge.tar.bz2\n _openmp_mutex 4.5 2_gnu 23.1 KiB conda _openmp_mutex-4.5-2_gnu.tar.bz2\n bzip2 1.0.8 hd590300_5 248.3 KiB conda bzip2-1.0.8-hd590300_5.conda\n ca-certificates 2023.11.17 hbcca054_0 150.5 KiB conda ca-certificates-2023.11.17-hbcca054_0.conda\n ld_impl_linux-64 2.40 h41732ed_0 688.2 KiB conda ld_impl_linux-64-2.40-h41732ed_0.conda\n libexpat 2.5.0 hcb278e6_1 76.2 KiB conda libexpat-2.5.0-hcb278e6_1.conda\n libffi 3.4.2 h7f98852_5 56.9 KiB conda libffi-3.4.2-h7f98852_5.tar.bz2\n libgcc-ng 13.2.0 h807b86a_4 755.7 KiB conda libgcc-ng-13.2.0-h807b86a_4.conda\n libgomp 13.2.0 h807b86a_4 412.2 KiB conda libgomp-13.2.0-h807b86a_4.conda\n libnsl 2.0.1 hd590300_0 32.6 KiB conda libnsl-2.0.1-hd590300_0.conda\n libsqlite 3.44.2 h2797004_0 826 KiB conda libsqlite-3.44.2-h2797004_0.conda\n libuuid 2.38.1 h0b41bf4_0 32.8 KiB conda libuuid-2.38.1-h0b41bf4_0.conda\n libxcrypt 4.4.36 hd590300_1 98 KiB conda libxcrypt-4.4.36-hd590300_1.conda\n libzlib 1.2.13 hd590300_5 60.1 KiB conda libzlib-1.2.13-hd590300_5.conda\n ncurses 6.4 h59595ed_2 863.7 KiB conda ncurses-6.4-h59595ed_2.conda\n openssl 3.2.0 hd590300_1 2.7 MiB conda openssl-3.2.0-hd590300_1.conda\n python 3.12.1 hab00c5b_1_cpython 30.8 MiB conda python-3.12.1-hab00c5b_1_cpython.conda\n readline 8.2 h8228510_1 274.9 KiB conda readline-8.2-h8228510_1.conda\n tk 8.6.13 noxft_h4845f30_101 3.2 MiB conda tk-8.6.13-noxft_h4845f30_101.conda\n tzdata 2023d h0c530f3_0 116.8 KiB conda tzdata-2023d-h0c530f3_0.conda\n xz 5.2.6 h166bdaf_0 408.6 KiB conda xz-5.2.6-h166bdaf_0.tar.bz2\n
"},{"location":"reference/cli/#tree","title":"tree
","text":"Display the project's packages in a tree. Highlighted packages are those specified in the manifest.
The package tree can also be inverted (-i
), to see which packages require a specific dependency.
REGEX
optional regex of which dependencies to filter the tree to, or which dependencies to start with when inverting the tree.--invert (-i)
: Invert the dependency tree; that is, given a REGEX
pattern that matches some packages, show all the packages that depend on those.--platform <PLATFORM> (-p)
: The platform to list packages for. Defaults to the current platform.--manifest-path <MANIFEST_PATH>
: The path to the manifest file; by default it searches for one in the parent directories.--environment (-e)
: The environment's packages to list; if none is provided, the default environment's packages will be listed.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with the manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: Only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--no-install
: Don't install the environment for pypi solving, only update the lock-file if it can solve without installing. (Implied by --frozen
and --locked
)pixi tree\npixi tree pre-commit\npixi tree -i yaml\npixi tree --environment docs\npixi tree --platform win-64\n
Warning
Use -v
to show which pypi
packages are not yet parsed correctly. The extras
and markers
parsing is still under development.
Output will look like this, where direct packages in the manifest file will be green. Once a package has been displayed once, the tree won't continue to recurse through its dependencies (compare the first time python
appears, vs the rest), and it will instead be marked with a star (*)
.
Version numbers are colored by the package type, yellow for Conda packages and blue for PyPI.
\u279c pixi tree\n\u251c\u2500\u2500 pre-commit v3.3.3\n\u2502 \u251c\u2500\u2500 cfgv v3.3.1\n\u2502 \u2502 \u2514\u2500\u2500 python v3.12.2\n\u2502 \u2502 \u251c\u2500\u2500 bzip2 v1.0.8\n\u2502 \u2502 \u251c\u2500\u2500 libexpat v2.6.2\n\u2502 \u2502 \u251c\u2500\u2500 libffi v3.4.2\n\u2502 \u2502 \u251c\u2500\u2500 libsqlite v3.45.2\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13\n\u2502 \u2502 \u251c\u2500\u2500 libzlib v1.2.13 (*)\n\u2502 \u2502 \u251c\u2500\u2500 ncurses v6.4.20240210\n\u2502 \u2502 \u251c\u2500\u2500 openssl v3.2.1\n\u2502 \u2502 \u251c\u2500\u2500 readline v8.2\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 ncurses v6.4.20240210 (*)\n\u2502 \u2502 \u251c\u2500\u2500 tk v8.6.13\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13 (*)\n\u2502 \u2502 \u2514\u2500\u2500 xz v5.2.6\n\u2502 \u251c\u2500\u2500 identify v2.5.35\n\u2502 \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n...\n\u2514\u2500\u2500 tbump v6.9.0\n...\n \u2514\u2500\u2500 tomlkit v0.12.4\n \u2514\u2500\u2500 python v3.12.2 (*)\n
A regex pattern can be specified to filter the tree to just those that show a specific direct or transitive dependency:
\u279c pixi tree pre-commit\n\u2514\u2500\u2500 pre-commit v3.3.3\n \u251c\u2500\u2500 virtualenv v20.25.1\n \u2502 \u251c\u2500\u2500 filelock v3.13.1\n \u2502 \u2502 \u2514\u2500\u2500 python v3.12.2\n \u2502 \u2502 \u251c\u2500\u2500 libexpat v2.6.2\n \u2502 \u2502 \u251c\u2500\u2500 readline v8.2\n \u2502 \u2502 \u2502 \u2514\u2500\u2500 ncurses v6.4.20240210\n \u2502 \u2502 \u251c\u2500\u2500 libsqlite v3.45.2\n \u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13\n \u2502 \u2502 \u251c\u2500\u2500 bzip2 v1.0.8\n \u2502 \u2502 \u251c\u2500\u2500 libzlib v1.2.13 (*)\n \u2502 \u2502 \u251c\u2500\u2500 libffi v3.4.2\n \u2502 \u2502 \u251c\u2500\u2500 tk v8.6.13\n \u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13 (*)\n \u2502 \u2502 \u251c\u2500\u2500 xz v5.2.6\n \u2502 \u2502 \u251c\u2500\u2500 ncurses v6.4.20240210 (*)\n \u2502 \u2502 \u2514\u2500\u2500 openssl v3.2.1\n \u2502 \u251c\u2500\u2500 platformdirs v4.2.0\n \u2502 \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n \u2502 \u251c\u2500\u2500 distlib v0.3.8\n \u2502 \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n \u251c\u2500\u2500 pyyaml v6.0.1\n...\n
Additionally, the tree can be inverted, and it can show which packages depend on a regex pattern. The packages specified in the manifest will also be highlighted (in this case cffconvert
and pre-commit
would be).
\u279c pixi tree -i yaml\n\nruamel.yaml v0.18.6\n\u251c\u2500\u2500 pykwalify v1.8.0\n\u2502 \u2514\u2500\u2500 cffconvert v2.0.0\n\u2514\u2500\u2500 cffconvert v2.0.0\n\npyyaml v6.0.1\n\u2514\u2500\u2500 pre-commit v3.3.3\n\nruamel.yaml.clib v0.2.8\n\u2514\u2500\u2500 ruamel.yaml v0.18.6\n \u251c\u2500\u2500 pykwalify v1.8.0\n \u2502 \u2514\u2500\u2500 cffconvert v2.0.0\n \u2514\u2500\u2500 cffconvert v2.0.0\n\nyaml v0.2.5\n\u2514\u2500\u2500 pyyaml v6.0.1\n \u2514\u2500\u2500 pre-commit v3.3.3\n
"},{"location":"reference/cli/#shell","title":"shell
","text":"This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit
.
--change-ps1 <true or false>
: When set to false, the (pixi)
prefix in the shell prompt is removed (default: true
). The default behavior can be configured globally.--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with the manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate the shell in; if none is provided, the default environment will be used, or a selector will be given to select the right environment.pixi shell\nexit\npixi shell --manifest-path ~/myproject/pixi.toml\nexit\npixi shell --frozen\nexit\npixi shell --locked\nexit\npixi shell --environment cuda\nexit\n
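The global default for the (pixi) prompt prefix mentioned under --change-ps1 lives in the pixi configuration; a minimal sketch, assuming the global config file location (e.g. ~/.pixi/config.toml):

```toml
# Do not prepend (pixi) to the shell prompt by default;
# the --change-ps1 flag still overrides this per invocation.
change-ps1 = false
```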
"},{"location":"reference/cli/#shell-hook","title":"shell-hook
","text":"This command prints the activation script of an environment.
"},{"location":"reference/cli/#options_15","title":"Options","text":"--shell <SHELL> (-s)
: The shell for which the activation script should be printed. Defaults to the current shell. Currently supported variants: [bash
, zsh
, xonsh
, cmd
, powershell
, fish
, nushell
]--manifest-path
: the path to the manifest file; by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with the manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate; if none is provided, the default environment will be used, or a selector will be given to select the right environment.--json
: Print all environment variables that are exported by running the activation script as JSON. When specifying this option, --shell
is ignored.pixi shell-hook\npixi shell-hook --shell bash\npixi shell-hook --shell zsh\npixi shell-hook -s powershell\npixi shell-hook --manifest-path ~/myproject/pixi.toml\npixi shell-hook --frozen\npixi shell-hook --locked\npixi shell-hook --environment cuda\npixi shell-hook --json\n
Example use case: when you want to get rid of the pixi
executable in a Docker container.
pixi shell-hook --shell bash > /etc/profile.d/pixi.sh\nrm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.\n
"},{"location":"reference/cli/#search","title":"search
","text":"Search for a package; the output will list the latest version of the package.
"},{"location":"reference/cli/#arguments_10","title":"Arguments","text":"<PACKAGE>
: Name of the package to search for; it's possible to use wildcards (*
).--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--limit <LIMIT> (-l)
: optionally limit the number of search results.--platform <PLATFORM> (-p)
: specify a platform that you want to search for. (default: current platform)pixi search pixi\npixi search --limit 30 \"py*\"\n# search in a different channel and for a specific platform\npixi search -c robostack --platform linux-64 \"plotjuggler*\"\n
"},{"location":"reference/cli/#self-update","title":"self-update
","text":"Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi
), pixi won't update, to prevent breaking the current installation (Homebrew, etc.). The behavior can be overridden with the --force
flag.
--version <VERSION>
: The desired version (to downgrade or upgrade to). Update to the latest version if not specified.--force
: Force the update even if the pixi binary is not found in the default location.pixi self-update\npixi self-update --version 0.13.0\npixi self-update --force\n
"},{"location":"reference/cli/#info","title":"info
","text":"Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.
"},{"location":"reference/cli/#options_18","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--extended
: extend the information with more slow queries to the system, like directory sizes.--json
: Get a machine-readable version of the information as output.pixi info\npixi info --json --extended\n
"},{"location":"reference/cli/#clean","title":"clean
","text":"Clean the parts of your system that are touched by pixi. Defaults to cleaning the environments and task cache. Use the cache
subcommand to clean the cache.
--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--environment <ENVIRONMENT> (-e)
: The environment to clean; if none is provided, all environments will be removed.pixi clean\n
"},{"location":"reference/cli/#clean-cache","title":"clean cache
","text":"Clean the pixi cache on your system.
"},{"location":"reference/cli/#options_20","title":"Options","text":"--pypi
: Clean the pypi cache.--conda
: Clean the conda cache.--yes
: Skip the confirmation prompt.pixi clean cache # clean all pixi caches\npixi clean cache --pypi # clean only the pypi cache\npixi clean cache --conda # clean only the conda cache\npixi clean cache --yes # skip the confirmation prompt\n
"},{"location":"reference/cli/#upload","title":"upload
","text":"Upload a package to a prefix.dev channel.
"},{"location":"reference/cli/#arguments_11","title":"Arguments","text":"<HOST>
: The host + channel to upload to.<PACKAGE_FILE>
: The package file to upload.pixi upload https://prefix.dev/api/v1/upload/my_channel my_package.conda\n
"},{"location":"reference/cli/#auth","title":"auth
","text":"This command is used to authenticate the user's access to remote hosts such as prefix.dev
or anaconda.org
for private channels.
auth login
","text":"Store authentication information for a given host.
Tip
The host is the real hostname, not a channel.
"},{"location":"reference/cli/#arguments_12","title":"Arguments","text":"<HOST>
: The host to authenticate with.--token <TOKEN>
: The token to use for authentication with prefix.dev.--username <USERNAME>
: The username to use for basic HTTP authentication.--password <PASSWORD>
: The password to use for basic HTTP authentication.--conda-token <CONDA_TOKEN>
: The token to use on anaconda.org
/ quetz
authentication.pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaAndHANx0qHjq7f2iD\npixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP\npixi auth login https://myquetz.server --username john --password xxxxxx\n
"},{"location":"reference/cli/#auth-logout","title":"auth logout
","text":"Remove authentication information for a given host.
"},{"location":"reference/cli/#arguments_13","title":"Arguments","text":"<HOST>
: The host to authenticate with.pixi auth logout <HOST>\npixi auth logout repo.prefix.dev\npixi auth logout anaconda.org\n
"},{"location":"reference/cli/#config","title":"config
","text":"Use this command to manage the configuration.
"},{"location":"reference/cli/#options_22","title":"Options","text":"--system (-s)
: Specify management scope to system configuration.--global (-g)
: Specify management scope to global configuration.--local (-l)
: Specify management scope to local configuration.Checkout the pixi configuration for more information about the locations.
"},{"location":"reference/cli/#config-edit","title":"config edit
","text":"Edit the configuration file in the default editor.
pixi config edit --system\npixi config edit --local\npixi config edit -g\n
"},{"location":"reference/cli/#config-list","title":"config list
","text":"List the configuration
"},{"location":"reference/cli/#arguments_14","title":"Arguments","text":"[KEY]
: The key to list the value of. (all if not provided)--json
: Output the configuration in JSON format.pixi config list default-channels\npixi config list --json\npixi config list --system\npixi config list -g\n
"},{"location":"reference/cli/#config-prepend","title":"config prepend
","text":"Prepend a value to a list configuration key.
"},{"location":"reference/cli/#arguments_15","title":"Arguments","text":"<KEY>
: The key to prepend the value to.<VALUE>
: The value to prepend.pixi config prepend default-channels conda-forge\n
"},{"location":"reference/cli/#config-append","title":"config append
","text":"Append a value to a list configuration key.
"},{"location":"reference/cli/#arguments_16","title":"Arguments","text":"<KEY>
: The key to append the value to.<VALUE>
: The value to append.pixi config append default-channels robostack\npixi config append default-channels bioconda --global\n
"},{"location":"reference/cli/#config-set","title":"config set
","text":"Set a configuration key to a value.
"},{"location":"reference/cli/#arguments_17","title":"Arguments","text":"<KEY>
: The key to set the value of.[VALUE]
: The value to set. (if not provided, the key will be removed)pixi config set default-channels '[\"conda-forge\", \"bioconda\"]'\npixi config set --global mirrors '{\"https://conda.anaconda.org/\": [\"https://prefix.dev/conda-forge\"]}'\npixi config set repodata-config.disable-zstd true --system\npixi config set --global detached-environments \"/opt/pixi/envs\"\npixi config set detached-environments false\n
"},{"location":"reference/cli/#config-unset","title":"config unset
","text":"Unset a configuration key.
"},{"location":"reference/cli/#arguments_18","title":"Arguments","text":"<KEY>
: The key to unset.pixi config unset default-channels\npixi config unset --global mirrors\npixi config unset repodata-config.disable-zstd --system\n
"},{"location":"reference/cli/#global","title":"global
","text":"Global is the main entry point for the part of pixi that executes on the global(system) level.
Tip
Binaries and environments installed globally are stored in ~/.pixi
by default, this can be changed by setting the PIXI_HOME
environment variable.
global install
","text":"This command installs package(s) into its own environment and adds the binary to PATH
, allowing you to access it anywhere on your system without activating the environment.
1.<PACKAGE>
: The package(s) to install, this can also be a version constraint.
--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that you want to install the package for. (default: current platform)pixi global install ruff\n# multiple packages can be installed at once\npixi global install starship rattler-build\n# specify the channel(s)\npixi global install --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global install -c conda-forge -c bioconda trackplot\n\n# Support full conda matchspec\npixi global install python=3.9.*\npixi global install \"python [version='3.11.0', build_number=1]\"\npixi global install \"python [version='3.11.0', build=he550d4f_1_cpython]\"\npixi global install python=3.11.0=h10a6764_1_cpython\n\n# Install for a specific platform, only useful on osx-arm64\npixi global install --platform osx-64 ruff\n
Tip
Running osx-64
on Apple Silicon will install the Intel binary but run it using Rosetta
pixi global install --platform osx-64 ruff\n
After using global install, you can use the package you installed anywhere on your system.
"},{"location":"reference/cli/#global-list","title":"global list
","text":"This command shows the current installed global environments including what binaries come with it. A global installed package/environment can possibly contain multiple binaries and they will be listed out in the command output. Here is an example of a few installed packages:
> pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n| \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n| \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n| \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n| \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n \u2514\u2500 exec: uv\n
"},{"location":"reference/cli/#global-upgrade","title":"global upgrade
","text":"This command upgrades a globally installed package (to the latest version by default).
"},{"location":"reference/cli/#arguments_20","title":"Arguments","text":"<PACKAGE>
: The package to upgrade.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note the channel the package was installed from will be always used for upgrade. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that you want to upgrade the package for. (default: current platform)pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
"},{"location":"reference/cli/#global-upgrade-all","title":"global upgrade-all
","text":"This command upgrades all globally installed packages to their latest version.
"},{"location":"reference/cli/#options_26","title":"Options","text":"--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note the channel the package was installed from will be always used for upgrade. (Allowed to be used more than once)pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda trackplot\n
"},{"location":"reference/cli/#global-remove","title":"global remove
","text":"Removes a package previously installed into a globally accessible location via pixi global install
Use pixi global info
to find out what the package name is that belongs to the tool you want to remove.
<PACKAGE>
: The package(s) to remove.pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
"},{"location":"reference/cli/#project","title":"project
","text":"This subcommand allows you to modify the project configuration through the command line interface.
"},{"location":"reference/cli/#options_27","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.project channel add
","text":"Add channels to the channel list in the project configuration. When you add channels, the channels are tested for existence, added to the lock file and the environment is reinstalled.
"},{"location":"reference/cli/#arguments_22","title":"Arguments","text":"<CHANNEL>
: The channels to add, name or URL.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is added.pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
"},{"location":"reference/cli/#project-channel-list","title":"project channel list
","text":"List the channels in the manifest file
"},{"location":"reference/cli/#options_29","title":"Options","text":"urls
: show the urls of the channels instead of the names.$ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
"},{"location":"reference/cli/#project-channel-remove","title":"project channel remove
","text":"List the channels in the manifest file
"},{"location":"reference/cli/#arguments_23","title":"Arguments","text":"<CHANNEL>...
: The channels to remove, name(s) or URL(s).--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is removed.pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
"},{"location":"reference/cli/#project-description-get","title":"project description get
","text":"Get the project description.
$ pixi project description get\nPackage management made easy!\n
"},{"location":"reference/cli/#project-description-set","title":"project description set
","text":"Set the project description.
"},{"location":"reference/cli/#arguments_24","title":"Arguments","text":"<DESCRIPTION>
: The description to set.pixi project description set \"my new description\"\n
"},{"location":"reference/cli/#project-environment-add","title":"project environment add
","text":"Add an environment to the manifest file.
"},{"location":"reference/cli/#arguments_25","title":"Arguments","text":"<NAME>
: The name of the environment to add.-f, --feature <FEATURES>
: Features to add to the environment.--solve-group <SOLVE_GROUP>
: The solve-group to add the environment to.--no-default-feature
: Don't include the default feature in the environment.--force
: Update the manifest even if the environment already exists.pixi project environment add env1 --feature feature1 --feature feature2\npixi project environment add env2 -f feature1 --solve-group test\npixi project environment add env3 -f feature1 --no-default-feature\npixi project environment add env3 -f feature1 --force\n
"},{"location":"reference/cli/#project-environment-remove","title":"project environment remove
","text":"Remove an environment from the manifest file.
"},{"location":"reference/cli/#arguments_26","title":"Arguments","text":"<NAME>
: The name of the environment to remove.pixi project environment remove env1\n
"},{"location":"reference/cli/#project-environment-list","title":"project environment list
","text":"List the environments in the manifest file.
pixi project environment list\n
"},{"location":"reference/cli/#project-export-conda_environment","title":"project export conda_environment
","text":"Exports a conda environment.yml
file. The file can be used to create a conda environment using conda/mamba:
pixi project export conda-environment environment.yml\nmamba create --name <env> --file environment.yml\n
"},{"location":"reference/cli/#arguments_27","title":"Arguments","text":"<OUTPUT_PATH>
: Optional path to render environment.yml to. Otherwise it will be printed to standard out.--environment <ENVIRONMENT> (-e)
: Environment to render.--platform <PLATFORM> (-p)
: The platform to render.pixi project export conda-environment --environment lint\npixi project export conda-environment --platform linux-64 environment.linux-64.yml\n
"},{"location":"reference/cli/#project-export-conda_explicit_spec","title":"project export conda_explicit_spec
","text":"Render a platform-specific conda explicit specification file for an environment. The file can be then used to create a conda environment using conda/mamba:
mamba create --name <env> --file <explicit spec file>\n
As the explicit specification file format does not support pypi-dependencies, use the --ignore-pypi-errors
option to ignore those dependencies.
<OUTPUT_DIR>
: Output directory for rendered explicit environment spec files.--environment <ENVIRONMENT> (-e)
: Environment to render. Can be repeated for multiple envs. Defaults to all environments.--platform <PLATFORM> (-p)
: The platform to render. Can be repeated for multiple platforms. Defaults to all platforms available for selected environments.--ignore-pypi-errors
: PyPI dependencies are not supported in the conda explicit spec file. This flag allows creating the spec file even if PyPI dependencies are present.pixi project export conda_explicit_spec output\npixi project export conda_explicit_spec -e default -e test -p linux-64 output\n
"},{"location":"reference/cli/#project-platform-add","title":"project platform add
","text":"Adds a platform(s) to the manifest file and updates the lock file.
"},{"location":"reference/cli/#arguments_29","title":"Arguments","text":"<PLATFORM>...
: The platforms to add.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be added.pixi project platform add win-64\npixi project platform add --feature test win-64\n
"},{"location":"reference/cli/#project-platform-list","title":"project platform list
","text":"List the platforms in the manifest file.
$ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
"},{"location":"reference/cli/#project-platform-remove","title":"project platform remove
","text":"Remove platform(s) from the manifest file and updates the lock file.
"},{"location":"reference/cli/#arguments_30","title":"Arguments","text":"<PLATFORM>...
: The platforms to remove.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be removed.pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
"},{"location":"reference/cli/#project-version-get","title":"project version get
","text":"Get the project version.
$ pixi project version get\n0.11.0\n
"},{"location":"reference/cli/#project-version-set","title":"project version set
","text":"Set the project version.
"},{"location":"reference/cli/#arguments_31","title":"Arguments","text":"<VERSION>
: The version to set.pixi project version set \"0.13.0\"\n
"},{"location":"reference/cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}
","text":"Bump the project version to {MAJOR|MINOR|PATCH}.
pixi project version major\npixi project version minor\npixi project version patch\n
An up-to-date lock file means that the dependencies in the lock file are allowed by the dependencies in the manifest file. For example:
python = \">= 3.11\"
is up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.python = \">= 3.12\"
is not up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.Being up-to-date does not mean that the lock file holds the latest version available on the channel for the given dependency.\u00a0\u21a9\u21a9\u21a9\u21a9\u21a9\u21a9
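The up-to-date rule above can be sketched as a toy check (illustrative only; conda's real version-spec matching handles far more than simple `>=` specs):

```python
def is_up_to_date(locked_version: str, spec: str) -> bool:
    """Toy check of a locked version against a '>= X.Y' spec."""
    if not spec.startswith(">="):
        raise ValueError("this sketch only handles '>=' specs")
    minimum = tuple(int(p) for p in spec[2:].strip().split("."))
    locked = tuple(int(p) for p in locked_version.split("."))
    # Tuple comparison: (3, 11, 0) >= (3, 11) is True, (3, 11, 0) >= (3, 12) is False
    return locked >= minimum

# python = ">= 3.11" is up-to-date with a locked python 3.11.0
print(is_up_to_date("3.11.0", ">= 3.11"))  # True
# python = ">= 3.12" is not
print(is_up_to_date("3.11.0", ">= 3.12"))  # False
```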
Apart from the project-specific configuration, pixi supports configuration options that are not required for the project to work but are local to the machine. The configuration is loaded in the following order:
LinuxmacOSWindows Priority Location Comments 1/etc/pixi/config.toml
System-wide configuration 2 $XDG_CONFIG_HOME/pixi/config.toml
XDG compliant user-specific configuration 3 $HOME/.config/pixi/config.toml
User-specific configuration 4 $PIXI_HOME/config.toml
Global configuration in the user home directory. PIXI_HOME
defaults to ~/.pixi
5 your_project/.pixi/config.toml
Project-specific configuration 6 Command line arguments (--tls-no-verify
, --change-ps1=false
, etc.) Configuration via command line arguments Priority Location Comments 1 /etc/pixi/config.toml
System-wide configuration 2 $XDG_CONFIG_HOME/pixi/config.toml
XDG compliant user-specific configuration 3 $HOME/Library/Application Support/pixi/config.toml
User-specific configuration 4 $PIXI_HOME/config.toml
Global configuration in the user home directory. PIXI_HOME
defaults to ~/.pixi
5 your_project/.pixi/config.toml
Project-specific configuration 6 Command line arguments (--tls-no-verify
, --change-ps1=false
, etc.) Configuration via command line arguments Priority Location Comments 1 C:\\ProgramData\\pixi\\config.toml
System-wide configuration 2 %APPDATA%\\pixi\\config.toml
User-specific configuration 3 $PIXI_HOME\\config.toml
Global configuration in the user home directory. PIXI_HOME
defaults to %USERPROFILE%/.pixi
4 your_project\\.pixi\\config.toml
Project-specific configuration 5 Command line arguments (--tls-no-verify
, --change-ps1=false
, etc.) Configuration via command line arguments Note
The highest priority wins. If a configuration file is found in a higher-priority location, the values read from lower-priority locations are overwritten.
Note
To find the locations where pixi
looks for configuration files, run pixi
with -vv
.
In versions of pixi 0.20.1
and older the global configuration used snake_case we've changed to kebab-case
for consistency with the rest of the configuration. But we still support the old snake_case
configuration, for older configuration options. These are:
default_channels
change_ps1
tls_no_verify
authentication_override_file
mirrors
and sub-optionsrepodata-config
and sub-optionsThe following reference describes all available configuration options.
"},{"location":"reference/pixi_configuration/#default-channels","title":"default-channels
","text":"The default channels to select when running pixi init
or pixi global install
. This defaults to only conda-forge. config.toml
default-channels = [\"conda-forge\"]\n
Note
The default-channels
are only used when initializing a new project. Once initialized the channels
are used from the project manifest.
change-ps1
","text":"When set to false, the (pixi)
prefix in the shell prompt is removed. This applies to the pixi shell
subcommand. You can override this from the CLI with --change-ps1
.
change-ps1 = true\n
"},{"location":"reference/pixi_configuration/#tls-no-verify","title":"tls-no-verify
","text":"When set to true, the TLS certificates are not verified.
Warning
This is a security risk and should only be used for testing purposes or internal networks.
You can override this from the CLI with --tls-no-verify
.
tls-no-verify = false\n
"},{"location":"reference/pixi_configuration/#authentication-override-file","title":"authentication-override-file
","text":"Override from where the authentication information is loaded. Usually, we try to use the keyring to load authentication data from, and only use a JSON file as a fallback. This option allows you to force the use of a JSON file. Read more in the authentication section. config.toml
authentication-override-file = \"/path/to/your/override.json\"\n
"},{"location":"reference/pixi_configuration/#detached-environments","title":"detached-environments
","text":"The directory where pixi stores the project environments, what would normally be placed in the .pixi/envs
folder in a project's root. It doesn't affect the environments built for pixi global
. The location of environments created for a pixi global
installation can be controlled using the PIXI_HOME
environment variable.
Warning
We recommend against using this because any environment created for a project is no longer placed in the same folder as the project. This creates a disconnect between the project and its environments and manual cleanup of the environments is required when deleting the project.
However, in some cases, this option can still be very useful, for instance to:
This field can consist of two types of input.
true
or false
, which will enable or disable the feature respectively. (not \"true\"
or \"false\"
, this is read as false
)config.toml
detached-environments = true\n
or: config.tomldetached-environments = \"/opt/pixi/envs\"\n
The environments will be stored in the cache directory when this option is true
. When you specify a custom path, the environments will be stored in that directory.
The resulting directory structure will look like this: config.toml
detached-environments = \"/opt/pixi/envs\"\n
/opt/pixi/envs\n\u251c\u2500\u2500 pixi-6837172896226367631\n\u2502 \u2514\u2500\u2500 envs\n\u2514\u2500\u2500 NAME_OF_PROJECT-HASH_OF_ORIGINAL_PATH\n \u251c\u2500\u2500 envs # the runnable environments\n \u2514\u2500\u2500 solve-group-envs # If there are solve groups\n
"},{"location":"reference/pixi_configuration/#pinning-strategy","title":"pinning-strategy
","text":"The strategy to use for pinning dependencies when running pixi add
. The default is semver
but you can set the following:
no-pin
: No pinning, resulting in an unconstrained dependency. *
semver
: Pinning to the latest version that satisfies the semver constraint, resulting in a pin to major for most versions and to minor for v0
versions.exact-version
: Pinning to the exact version, 1.2.3
-> ==1.2.3
.major
: Pinning to the major version, 1.2.3
-> >=1.2.3, <2
.minor
: Pinning to the minor version, 1.2.3
-> >=1.2.3, <1.3
.latest-up
: Pinning to the latest version, 1.2.3
-> >=1.2.3
.pinning-strategy = \"no-pin\"\n
"},{"location":"reference/pixi_configuration/#mirrors","title":"mirrors
","text":"Configuration for conda channel-mirrors, more info below.
config.toml[mirrors]\n# redirect all requests for conda-forge to the prefix.dev mirror\n\"https://conda.anaconda.org/conda-forge\" = [\n \"https://prefix.dev/conda-forge\"\n]\n\n# redirect all requests for bioconda to one of the three listed mirrors\n# Note: for repodata we try the first mirror first.\n\"https://conda.anaconda.org/bioconda\" = [\n \"https://conda.anaconda.org/bioconda\",\n # OCI registries are also supported\n \"oci://ghcr.io/channel-mirrors/bioconda\",\n \"https://prefix.dev/bioconda\",\n]\n
"},{"location":"reference/pixi_configuration/#repodata-config","title":"repodata-config
","text":"Configuration for repodata fetching. config.toml
[repodata-config]\n# disable fetching of jlap, bz2 or zstd repodata files.\n# This should only be used for specific old versions of artifactory and other non-compliant\n# servers.\ndisable-jlap = true # don't try to download repodata.jlap\ndisable-bzip2 = true # don't try to download repodata.json.bz2\ndisable-zstd = true # don't try to download repodata.json.zst\n
"},{"location":"reference/pixi_configuration/#pypi-config","title":"pypi-config
","text":"To setup a certain number of defaults for the usage of PyPI registries. You can use the following configuration options:
index-url
: The default index URL to use for PyPI packages. This will be added to a manifest file on a pixi init
.extra-index-urls
: A list of additional URLs to use for PyPI packages. This will be added to a manifest file on a pixi init
.keyring-provider
: Allows the use of the keyring python package to store and retrieve credentials.[pypi-config]\n# Main index url\nindex-url = \"https://pypi.org/simple\"\n# list of additional urls\nextra-index-urls = [\"https://pypi.org/simple2\"]\n# can be \"subprocess\" or \"disabled\"\nkeyring-provider = \"subprocess\"\n
index-url
and extra-index-urls
are not globals
Unlike pip, these settings, with the exception of keyring-provider
will only modify the pixi.toml
/pyproject.toml
file and are not globally interpreted when not present in the manifest. This is because we want to keep the manifest file as complete and reproducible as possible.
You can configure mirrors for conda channels. We expect that mirrors are exact copies of the original channel. The implementation will look for the mirror key (a URL) in the mirrors
section of the configuration file and replace the original URL with the mirror URL.
To also include the original URL, you have to repeat it in the list of mirrors.
The mirrors are prioritized based on the order of the list. We attempt to fetch the repodata (the most important file) from the first mirror in the list. The repodata contains all the SHA256 hashes of the individual packages, so it is important to get this file from a trusted source.
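For example, to prefer a mirror while keeping the original channel available as a fallback, repeat the original URL after the mirror (URLs taken from the examples in this section):

```toml
[mirrors]
# The first entry is tried first for repodata; the repeated
# original URL stays available as a fallback.
"https://conda.anaconda.org/conda-forge" = [
    "https://prefix.dev/conda-forge",
    "https://conda.anaconda.org/conda-forge",
]
```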
You can also specify mirrors for an entire \"host\", e.g.
config.toml[mirrors]\n\"https://conda.anaconda.org\" = [\n \"https://prefix.dev/\"\n]\n
This will forward all requests for channels on anaconda.org to prefix.dev. Channels that are not currently mirrored on prefix.dev will fail in the above example.
"},{"location":"reference/pixi_configuration/#oci-mirrors","title":"OCI Mirrors","text":"You can also specify mirrors on the OCI registry. There is a public mirror on the Github container registry (ghcr.io) that is maintained by the conda-forge team. You can use it like this:
config.toml[mirrors]\n\"https://conda.anaconda.org/conda-forge\" = [\n \"oci://ghcr.io/channel-mirrors/conda-forge\"\n]\n
The GHCR mirror also contains bioconda
packages. You can search the available packages on GitHub.
The pixi.toml
is the pixi project configuration file, also known as the project manifest.
A toml
file is structured in different tables. This document will explain the usage of the different tables. For more technical documentation check pixi on crates.io.
Tip
We also support the pyproject.toml
file. It has the same structure as the pixi.toml
file. except that you need to prepend the tables with tool.pixi
instead of just the table name. For example, the [project]
table becomes [tool.pixi.project]
. There are also some small extras that are available in the pyproject.toml
file, check out the pyproject.toml documentation for more information.
project
table","text":"The minimally required information in the project
table is:
[project]\nchannels = [\"conda-forge\"]\nname = \"project-name\"\nplatforms = [\"linux-64\"]\n
"},{"location":"reference/project_configuration/#name","title":"name
","text":"The name of the project.
name = \"project-name\"\n
"},{"location":"reference/project_configuration/#channels","title":"channels
","text":"This is a list that defines the channels used to fetch the packages from. If you want to use channels hosted on anaconda.org
you only need to use the name of the channel directly.
channels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n
Channels situated on the file system are also supported with absolute file paths:
channels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n
To access private or public channels on prefix.dev or Quetz use the url including the hostname:
channels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n
"},{"location":"reference/project_configuration/#platforms","title":"platforms
","text":"Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lock file (pixi.lock
).
platforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n
The available platforms are listed here: link
Special macOS behavior
macOS has two platforms: osx-64
for Intel Macs and osx-arm64
for Apple Silicon Macs. To support both, include both in your platforms list. Fallback: If osx-arm64
can't resolve, use osx-64
. Running osx-64
on Apple Silicon uses Rosetta for Intel binaries.
version
(optional)","text":"The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation, for an explanation of what is allowed in a Version Spec.
version = \"1.2.3\"\n
"},{"location":"reference/project_configuration/#authors-optional","title":"authors
(optional)","text":"This is a list of authors of the project.
authors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n
"},{"location":"reference/project_configuration/#description-optional","title":"description
(optional)","text":"This should contain a short description of the project.
description = \"A simple description\"\n
"},{"location":"reference/project_configuration/#license-optional","title":"license
(optional)","text":"The license as a valid SPDX string (e.g. MIT AND Apache-2.0)
license = \"MIT\"\n
"},{"location":"reference/project_configuration/#license-file-optional","title":"license-file
(optional)","text":"Relative path to the license file.
license-file = \"LICENSE.md\"\n
"},{"location":"reference/project_configuration/#readme-optional","title":"readme
(optional)","text":"Relative path to the README file.
readme = \"README.md\"\n
"},{"location":"reference/project_configuration/#homepage-optional","title":"homepage
(optional)","text":"URL of the project homepage.
homepage = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#repository-optional","title":"repository
(optional)","text":"URL of the project source repository.
repository = \"https://github.com/prefix-dev/pixi\"\n
"},{"location":"reference/project_configuration/#documentation-optional","title":"documentation
(optional)","text":"URL of the project documentation.
documentation = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#conda-pypi-map-optional","title":"conda-pypi-map
(optional)","text":"Mapping of channel name or URL to location of mapping that can be URL/Path. Mapping should be structured in json
format where conda_name
: pypi_package_name
. Example:
{\n \"jupyter-ros\": \"my-name-from-mapping\",\n \"boltons\": \"boltons-pypi\"\n}\n
If conda-forge
is not present in conda-pypi-map
pixi
will use prefix.dev
mapping for it.
conda-pypi-map = { \"conda-forge\" = \"https://example.com/mapping\", \"https://repo.prefix.dev/robostack\" = \"local/robostack_mapping.json\"}\n
"},{"location":"reference/project_configuration/#channel-priority-optional","title":"channel-priority
(optional)","text":"This is the setting for the priority of the channels in the solver step.
Options:
strict
: Default, The channels are used in the order they are defined in the channels
list. Only packages from the first channel that has the package are used. This ensures that different variants for a single package are not mixed from different channels. Using packages from different incompatible channels like conda-forge
and main
can lead to hard to debug ABI incompatibilities.
We strongly recommend not to switch the default. - disabled
: There is no priority, all package variants from all channels will be set per package name and solved as one. Care should be taken when using this option. Since package variants can come from any channel when you use this mode, packages might not be compatible. This can cause hard to debug ABI incompatibilities.
We strongly discourage using this option.
channel-priority = \"disabled\"\n
channel-priority = \"disabled\"
is a security risk
Disabling channel priority may lead to unpredictable dependency resolutions. This is a possible security risk as it may lead to packages being installed from unexpected channels. It's advisable to maintain the default strict setting and order channels thoughtfully. If necessary, specify a channel directly for a dependency.
[project]\n# Putting conda-forge first solves most issues\nchannels = [\"conda-forge\", \"channel-name\"]\n[dependencies]\npackage = {version = \"*\", channel = \"channel-name\"}\n
"},{"location":"reference/project_configuration/#the-tasks-table","title":"The tasks
table","text":"Tasks are a way to automate certain custom commands in your project. For example, a lint
or format
step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run
and are executed using the deno_task_shell
.
[tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends-on=\"simple\"}\nalias = { depends-on=[\"depending\"]}\ndownload = { cmd=\"curl -o file.txt https://example.com/file.txt\" , outputs=[\"file.txt\"]}\nbuild = { cmd=\"npm build\", cwd=\"frontend\", inputs=[\"frontend/package.json\", \"frontend/*.js\"]}\nrun = { cmd=\"python run.py $ARGUMENT\", env={ ARGUMENT=\"value\" }}\nformat = { cmd=\"black $INIT_CWD\" } # runs black where you run pixi run format\nclean-env = { cmd = \"python isolated.py\", clean-env = true} # Only on Unix!\n
You can modify this table using pixi task
.
Note
Specify different tasks for different platforms using the target table
Info
If you want to hide a task from showing up with pixi task list
or pixi info
, you can prefix the name with _
. For example, if you want to hide depending
, you can rename it to _depending
.
system-requirements
table","text":"The system requirements are used to define minimal system specifications used during dependency resolution.
For example, we can define a unix system with a specific minimal libc version.
[system-requirements]\nlibc = \"2.28\"\n
or make the project depend on a specific version of cuda
: [system-requirements]\ncuda = \"12\"\n
The options are:
linux
: The minimal version of the linux kernel.libc
: The minimal version of the libc library. Also allows specifying the family of the libc library. e.g. libc = { family=\"glibc\", version=\"2.28\" }
macos
: The minimal version of the macOS operating system.cuda
: The minimal version of the CUDA library.More information in the system requirements documentation.
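Several of these options can be combined in one table; the values below are illustrative:

```toml
[system-requirements]
linux = "4.18"                                 # minimal kernel version
libc = { family = "glibc", version = "2.28" }  # libc family and version
cuda = "12"                                    # minimal CUDA version
```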
"},{"location":"reference/project_configuration/#the-pypi-options-table","title":"Thepypi-options
table","text":"The pypi-options
table is used to define options that are specific to PyPI registries. These options can be specified either at the root level, which will add it to the default options feature, or on feature level, which will create a union of these options when the features are included in the environment.
The options that can be defined are:
index-url
: replaces the main index url.extra-index-urls
: adds an extra index url.find-links
: similar to --find-links
option in pip
.no-build-isolation
: disables build isolation, can only be set per package.index-strategy
: allows for specifying the index strategy to use.These options are explained in the sections below. Most of these options are taken directly or with slight modifications from the uv settings. If any are missing that you need feel free to create an issue requesting them.
"},{"location":"reference/project_configuration/#alternative-registries","title":"Alternative registries","text":"Strict Index Priority
Unlike pip, because we make use of uv, we have a strict index priority. This means that the first index is used where a package can be found. The order is determined by the order in the toml file. Where the extra-index-urls
are preferred over the index-url
. Read more about this on the uv docs
Often you might want to use an alternative or extra index for your project. This can be done by adding the pypi-options
table to your pixi.toml
file, the following options are available:
index-url
: replaces the main index url. If this is not set the default index used is https://pypi.org/simple
. Only one index-url
can be defined per environment.extra-index-urls
: adds an extra index url. The urls are used in the order they are defined. And are preferred over the index-url
. These are merged across features into an environment.find-links
: which can either be a path {path = './links'}
or a url {url = 'https://example.com/links'}
. This is similar to the --find-links
option in pip
. These are merged across features into an environment.An example:
[pypi-options]\nindex-url = \"https://pypi.org/simple\"\nextra-index-urls = [\"https://example.com/simple\"]\nfind-links = [{path = './links'}]\n
There are some examples in the pixi repository, that make use of this feature.
Authentication Methods
To read about existing authentication methods for private registries, please check the PyPI Authentication section.
"},{"location":"reference/project_configuration/#no-build-isolation","title":"No Build Isolation","text":"Even though build isolation is a good default. One can choose to not isolate the build for a certain package name, this allows the build to access the pixi
environment. This is convenient if you want to use torch
or something similar for your build-process.
[dependencies]\npytorch = \"2.4.0\"\n\n[pypi-options]\nno-build-isolation = [\"detectron2\"]\n\n[pypi-dependencies]\ndetectron2 = { git = \"https://github.com/facebookresearch/detectron2.git\", rev = \"5b72c27ae39f99db75d43f18fd1312e1ea934e60\"}\n
Conda dependencies define the build environment
To use no-build-isolation
effectively, use conda dependencies to define the build environment. These are installed before the PyPI dependencies are resolved; this way they are available during the build process. In the example above, adding torch
as a PyPI dependency would be ineffective, as it would not yet be installed during the PyPI resolution phase.
The strategy to use when resolving against multiple index URLs. Description modified from the uv documentation:
By default, uv
and thus pixi
, will stop at the first index on which a given package is available, and limit resolutions to those present on that first index (first-match). This prevents dependency confusion attacks, whereby an attack can upload a malicious package under the same name to a secondary index.
One index strategy per environment
Only one index-strategy
can be defined per environment or solve-group, otherwise, an error will be shown.
a
is available on index x
and y
, it will prefer the version from x
unless you've requested a package version that is only available on y
.x
and y
that both contain package a
, it will take the best version from either x
or y
, but should that version be available on both indexes it will prefer x
.PyPI only
The index-strategy
only changes PyPI package resolution and not conda package resolution.
dependencies
table(s)","text":"This section defines what dependencies you would like to use for your project.
There are multiple dependencies tables. The default is [dependencies]
, which are dependencies that are shared across platforms.
Dependencies are defined using a VersionSpec. A VersionSpec
combines a Version with an optional operator.
Some examples are:
# Use this exact package version\npackage0 = \"1.2.3\"\n# Use 1.2.3 up to 1.3.0\npackage1 = \"~=1.2.3\"\n# Use larger than 1.2 lower and equal to 1.4\npackage2 = \">1.2,<=1.4\"\n# Bigger or equal than 1.2.3 or lower not including 1.0.0\npackage3 = \">=1.2.3|<1.0.0\"\n
Dependencies can also be defined as a mapping where it is using a matchspec:
package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n
Tip
The dependencies can be easily added using the pixi add
command line. Running add
for an existing dependency will replace it with the newest it can use.
Note
To specify different dependencies for different platforms use the target table
"},{"location":"reference/project_configuration/#dependencies","title":"dependencies
","text":"Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table should you use anything different than conda-forge
. Even if the dependency defines a channel that channel should be added to the project.channels
list.
[dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
"},{"location":"reference/project_configuration/#pypi-dependencies","title":"pypi-dependencies
","text":"Details regarding the PyPI integration We use uv
, which is a new fast pip replacement written in Rust.
We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This disallows uv from installing these dependencies itself, and ensures it uses the exact version of these packages in the resolution. This is unique amongst conda based package managers, which usually just call pip from a subprocess.
The uv resolution is included in the lock file directly.
Pixi directly supports depending on PyPI packages, the PyPA calls a distributed package a 'distribution'. There are Source and Binary distributions both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.
Important considerations
dependencies
table where possible. These dependencies don't follow the conda matchspec specification. The version
is a string specification of the version according to PEP 440/PyPA. Additionally, a list of extras can be included, which are essentially optional dependencies. Note that this version
is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:
[dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\nfastapi = \"*\" # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\npandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\n\n# git dependencies\n# With ssh\nflask = { git = \"ssh://git@github.com/pallets/flask\" }\n# With https and a specific revision\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\n# TODO: will support later -> branch = '' or tag = '' to specify a branch or tag\n\n# You can also directly add a source dependency from a path, tip keep this relative to the root of the project.\nminimal-project = { path = \"./minimal-project\", editable = true}\n\n# You can also use a direct url, to either a `.tar.gz` or `.zip`, or a `.whl` file\nclick = { url = \"https://github.com/pallets/click/releases/download/8.1.7/click-8.1.7-py3-none-any.whl\" }\n\n# You can also just the default git repo, it will checkout the default branch\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\n
"},{"location":"reference/project_configuration/#full-specification","title":"Full specification","text":"The full specification of the PyPI dependencies that pixi supports can be split into the following fields:
"},{"location":"reference/project_configuration/#extras","title":"extras
","text":"A list of extras to install with the package. e.g. [\"dataframe\", \"sql\"]
The extras field works with all other version specifiers as it is an addition to the version specifier.
pandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\npytest = { git = \"URL\", extras = [\"dev\"]}\nblack = { url = \"URL\", extras = [\"cli\"]}\nminimal-project = { path = \"./minimal-project\", editable = true, extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#version","title":"version
","text":"The version of the package to install. e.g. \">=1.0.0\"
or *
which stands for any version; this is pixi-specific. Version is our default field, so using no inline table ({
) will default to this field.
py-rattler = \"*\"\nruff = \"~=1.0.0\"\npytest = {version = \"*\", extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#git","title":"git
","text":"A git repository to install from. This supports both https:// and ssh:// urls.
Use git
in combination with rev
or subdirectory
:
rev
: A specific revision to install. e.g. rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\"
subdirectory
: A subdirectory to install from. subdirectory = \"src\"
or subdirectory = \"src/packagex\"
# Note don't forget the `ssh://` or `https://` prefix!\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\npy-rattler = { git = \"ssh://git@github.com/mamba-org/rattler.git\", subdirectory = \"py-rattler\" }\n
"},{"location":"reference/project_configuration/#path","title":"path
","text":"A local path to install from. e.g. path = \"./path/to/package\"
We advise keeping path dependencies inside the project and using a relative path.
Set editable
to true
to install in editable mode; this is highly recommended, as reinstalling is cumbersome when not using editable mode. e.g. editable = true
minimal-project = { path = \"./minimal-project\", editable = true}\n
"},{"location":"reference/project_configuration/#url","title":"url
","text":"A URL to install a wheel or sdist from directly.
pandas = {url = \"https://files.pythonhosted.org/packages/3d/59/2afa81b9fb300c90531803c0fd43ff4548074fa3e8d0f747ef63b3b5e77a/pandas-2.2.1.tar.gz\"}\n
Did you know you can use: add --pypi
? Use the --pypi
flag with the add
command to quickly add PyPI packages from the CLI. E.g. pixi add --pypi flask
This does not support all the features of the pypi-dependencies
table yet.
sdist
)","text":"The Source Distribution Format is a source-based format (sdist for short) that a package can publish alongside the binary wheel format. Because these distributions need to be built, they need a Python executable to do so; this is why Python needs to be present in the conda environment. Sdists usually depend on system packages to be built, especially when compiling C/C++-based Python bindings. Think, for example, of Python SDL2 bindings depending on the C library SDL2. To help build these dependencies, we activate the conda environment that includes these PyPI dependencies before resolving. This way, when a source distribution depends on gcc
for example, it's used from the conda environment instead of the system.
host-dependencies
","text":"This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.
[host-dependencies]\npython = \"~=3.10.3\"\n
Typical examples of host dependencies are:
python
here and an R package would list mro-base
or r-base
.openssl
, rapidjson
, or xtensor
.build-dependencies
","text":"This table contains dependencies that are needed to build the project. Different from dependencies
and host-dependencies
, these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.
[build-dependencies]\ncmake = \"~=3.24\"\n
Typical examples of build dependencies are:
cmake
is invoked on the build machine to generate additional code or project files, which are then included in the compilation process.Info
The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.
For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64
and your host platform is linux-64
.
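Putting the two tables together, a cross-compilation setup might be sketched as follows (illustrative packages only):

```toml
# Illustrative sketch: cmake runs on the build machine (e.g. osx-arm64),
# while openssl is installed for the host/target platform (e.g. linux-64).
[build-dependencies]
cmake = '~=3.24'

[host-dependencies]
openssl = '*'
```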
activation
table","text":"The activation table is used for specialized activation operations that need to be run when the environment is activated.
There are two types of activation operations a user can modify in the manifest:
scripts
: A list of scripts that are run when the environment is activated.env
: A mapping of environment variables that are set when the environment is activated. These activation operations will be run before the pixi run
and pixi shell
commands.
Note
The activation operations are run by the system shell interpreter as they run before an environment is available. This means that it runs as cmd.exe
on windows and bash
on linux and osx (Unix). Only .sh
, .bash
and .bat
files are supported.
And the environment variables are set in the shell that is running the activation script, thus take note when using e.g. $
or %
.
If you have scripts or env variable per platform use the target table.
[activation]\nscripts = [\"env_setup.sh\"]\nenv = { ENV_VAR = \"value\" }\n\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n\n[target.linux-64.activation.env]\nENV_VAR = \"linux-value\"\n\n# You can also reference existing environment variables, but this has\n# to be done separately for unix-like operating systems and Windows\n[target.unix.activation.env]\nENV_VAR = \"$OTHER_ENV_VAR/unix-value\"\n\n[target.win.activation.env]\nENV_VAR = \"%OTHER_ENV_VAR%\\\\windows-value\"\n
"},{"location":"reference/project_configuration/#the-target-table","title":"The target
table","text":"The target table allows for platform-specific configuration, letting you define different sets of tasks or dependencies per platform.
The target table is currently implemented for the following sub-tables:
activation
dependencies
tasks
The target table is defined using [target.PLATFORM.SUB-TABLE]
. E.g. [target.linux-64.dependencies]
The platform can be any of:
win
, osx
, linux
or unix
(unix
matches linux
and osx
)linux-64
, osx-arm64
The sub-table can be any of the specified above.
To make it a bit more clear, let's look at an example below. Currently, pixi combines the top level tables like dependencies
with the target-specific ones into a single set. Which, in the case of dependencies, can both add or overwrite dependencies. In the example below, we have cmake
being used for all targets but on osx-64
or osx-arm64
a different version of python will be selected.
[dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n
Here are some more examples:
[target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n
"},{"location":"reference/project_configuration/#the-feature-and-environments-tables","title":"The feature
and environments
tables","text":"The feature
table allows you to define features that can be used to create different [environments]
. The [environments]
table allows you to define different environments. The design is explained in the this design document.
[feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n
This will create an environment called test
that has pytest
installed.
feature
table","text":"The feature
table allows you to define the following fields per feature.
dependencies
: Same as the dependencies.pypi-dependencies
: Same as the pypi-dependencies.pypi-options
: Same as the pypi-options.system-requirements
: Same as the system-requirements.activation
: Same as the activation.platforms
: Same as the platforms. Unless overridden, the platforms
of the feature will be those defined at project level.channels
: Same as the channels. Unless overridden, the channels
of the feature will be those defined at project level.channel-priority
: Same as the channel-priority.target
: Same as the target.tasks
: Same as the tasks.These tables are all also available without the feature
prefix. When those are used we call them the default
feature. This is a protected name you can not use for your own feature.
[feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\n# Results in: [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\nchannels = [\"nvidia\"]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Cuda feature table example but written as separate tables[feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
"},{"location":"reference/project_configuration/#the-environments-table","title":"The environments
table","text":"The [environments]
table allows you to define environments that are created using the features defined in the [feature]
tables.
The environments table is defined using the following fields:
features
: The features that are included in the environment. Unless no-default-feature
is set to true
, the default feature is implicitly included in the environment.solve-group
: The solve group is used to group environments together at the solve stage. This is useful for environments that need to have the same dependencies but might extend them with additional dependencies. For instance when testing a production environment with additional test dependencies. These dependencies will then be the same version in all environments that have the same solve group. But the different environments contain different subsets of the solve-groups dependencies set.no-default-feature
: Whether to include the default feature in that environment. The default is false
, to include the default feature.Full environments table specification
[environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = {features = [\"lint\"], no-default-feature = true}\n
As shown in the example above, in the simplest of cases, it is possible to define an environment only by listing its features: Simplest example[environments]\ntest = [\"test\"]\n
is equivalent to
Simplest example expanded[environments]\ntest = {features = [\"test\"]}\n
When an environment comprises several features (including the default feature): - The activation
and tasks
of the environment are the union of the activation
and tasks
of all its features. - The dependencies
and pypi-dependencies
of the environment are the union of the dependencies
and pypi-dependencies
of all its features. This means that if several features define a requirement for the same package, both requirements will be combined. Beware of conflicting requirements across features added to the same environment. - The system-requirements
of the environment is the union of the system-requirements
of all its features. If multiple features specify a requirement for the same system package, the highest version is chosen. - The channels
of the environment is the union of the channels
of all its features. Channel priorities can be specified in each feature, to ensure channels are considered in the right order in the environment. - The platforms
of the environment is the intersection of the platforms
of all its features. Be aware that the platforms supported by a feature (including the default feature) will be considered as the platforms
defined at project level (unless overridden in the feature). This means that it is usually a good idea to set the project platforms
to all platforms it can support across its environments.
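As a sketch with hypothetical feature and package names, requirements from several features are combined when they land in the same environment, so the solver must satisfy all of them at once:

```toml
# Hypothetical sketch: both constraints on example-pkg apply to the
# combined environment, so the solve must satisfy '>=1.0' AND '<2'.
[feature.a.dependencies]
example-pkg = '>=1.0'

[feature.b.dependencies]
example-pkg = '<2'

[environments]
combined = ['a', 'b']
```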
The global configuration options are documented in the global configuration section.
"},{"location":"switching_from/conda/","title":"Transitioning from theconda
or mamba
to pixi
","text":"Welcome to the guide designed to ease your transition from conda
or mamba
to pixi
. This document compares key commands and concepts between these tools, highlighting pixi
's unique approach to managing environments and packages. With pixi
, you'll experience a project-based workflow, enhancing your development process, and allowing for easy sharing of your work.
Pixi
builds upon the foundation of the conda ecosystem, introducing a project-centric approach rather than focusing solely on environments. This shift towards projects offers a more organized and efficient way to manage dependencies and run code, tailored to modern development practices.
conda create -n myenv -c conda-forge python=3.8
pixi init myenv
followed by pixi add python=3.8
Activating an Environment conda activate myenv
pixi shell
within the project directory Deactivating an Environment conda deactivate
exit
from the pixi shell
Running a Task conda run -n myenv python my_program.py
pixi run python my_program.py
(See run) Installing a Package conda install numpy
pixi add numpy
Uninstalling a Package conda remove numpy
pixi remove numpy
No base
environment
Conda has a base environment, which is the default environment when you start a new shell. Pixi does not have a base environment. And requires you to install the tools you need in the project or globally. Using pixi global install bat
will install bat
in a global environment, which is not the same as the base
environment in conda.
For some advanced use-cases, you can activate the environment in the current shell. This uses the pixi shell-hook
which prints the activation script, which can be used to activate the environment in the current shell without pixi
itself.
~/myenv > eval \"$(pixi shell-hook)\"\n
"},{"location":"switching_from/conda/#environment-vs-project","title":"Environment vs Project","text":"Conda
and mamba
focus on managing environments, while pixi
emphasizes projects. In pixi
, a project is a folder containing a manifest(pixi.toml
/pyproject.toml
) file that describes the project, a pixi.lock
lock-file that describes the exact dependencies, and a .pixi
folder that contains the environment.
This project-centric approach allows for easy sharing and collaboration, as the project folder contains all the necessary information to recreate the environment. It manages more than one environment for more than one platform in a single project, and allows for easy switching between them. (See multiple environments)
"},{"location":"switching_from/conda/#global-environments","title":"Global environments","text":"conda
installs all environments in one global location. When this is important to you for filesystem reasons, you can use the detached-environments feature of pixi.
pixi config set detached-environments true\n# or a specific location\npixi config set detached-environments /path/to/envs\n
This doesn't allow you to activate the environments using pixi shell -n
but it will make the installation of the environments go to the same folder. pixi
does have the pixi global
command to install tools on your machine. (See global) This is not a replacement for conda
but works the same as pipx
and condax
. It creates a single isolated environment for the given requirement and installs the binaries into the global path.
pixi global install bat\nbat pixi.toml\n
Never install pip
with pixi global
Installations with pixi global
get their own isolated environment. Installing pip
with pixi global
will create a new isolated environment with its own pip
binary. Using that pip
binary will install packages in the pip
environment, making it unreachable form anywhere as you can't activate it.
With pixi
you can import environment.yml
files into a pixi project. (See import)
pixi init --import environment.yml\n
This will create a new project with the dependencies from the environment.yml
file. Exporting your environment If you are working with Conda users or systems, you can export your environment to a environment.yml
file to share them.
pixi project export conda\n
Additionally you can export a conda explicit specification."},{"location":"switching_from/conda/#troubleshooting","title":"Troubleshooting","text":"Encountering issues? Here are solutions to some common problems when being used to the conda
workflow:
is excluded because due to strict channel priority not using this option from: 'https://conda.anaconda.org/conda-forge/'
This error occurs when the package is in multiple channels. pixi
uses a strict channel priority. See channel priority for more information.pixi global install pip
, pip doesn't work. pip
is installed in the global isolated environment. Use pixi add pip
in a project to install pip
in the project environment and use that project.pixi global install <Any Library>
-> import <Any Library>
-> ModuleNotFoundError: No module named '<Any Library>'
The library is installed in the global isolated environment. Use pixi add <Any Library>
in a project to install the library in the project environment and use that project.poetry
to pixi
","text":"Welcome to the guide designed to ease your transition from poetry
to pixi
. This document compares key commands and concepts between these tools, highlighting pixi
's unique approach to managing environments and packages. With pixi
, you'll experience a project-based workflow similar to poetry
while including the conda
ecosystem and allowing for easy sharing of your work.
Poetry is most-likely the closest tool to pixi in terms of project management, in the python ecosystem. On top of the PyPI ecosystem, pixi
adds the power of the conda ecosystem, allowing for a more flexible and powerful environment management.
poetry new myenv
pixi init myenv
Running a Task poetry run which python
pixi run which python
pixi
uses a built-in cross platform shell for run where poetry uses your shell. Installing a Package poetry add numpy
pixi add numpy
adds the conda variant. pixi add --pypi numpy
adds the PyPI variant. Uninstalling a Package poetry remove numpy
pixi remove numpy
removes the conda variant. pixi remove --pypi numpy
removes the PyPI variant. Building a package poetry build
We've yet to implement package building and publishing Publishing a package poetry publish
We've yet to implement package building and publishing Reading the pyproject.toml [tool.poetry]
[tool.pixi]
Defining dependencies [tool.poetry.dependencies]
[tool.pixi.dependencies]
for conda, [tool.pixi.pypi-dependencies]
or [project.dependencies]
for PyPI dependencies Dependency definition - numpy = \"^1.2.3\"
- numpy = \"~1.2.3\"
- numpy = \"*\"
- numpy = \">=1.2.3 <2.0.0\"
- numpy = \">=1.2.3 <1.3.0\"
- numpy = \"*\"
Lock file poetry.lock
pixi.lock
Environment directory ~/.cache/pypoetry/virtualenvs/myenv
./.pixi
Defaults to the project folder, move this using the detached-environments
"},{"location":"switching_from/poetry/#support-both-poetry-and-pixi-in-my-project","title":"Support both poetry
and pixi
in my project","text":"You can allow users to use poetry
and pixi
in the same project, they will not touch each other's parts of the configuration or system. It's best to duplicate the dependencies, basically making an exact copy of the tool.poetry.dependencies
into tool.pixi.pypi-dependencies
. Make sure that python
is only defined in the tool.pixi.dependencies
and not in the tool.pixi.pypi-dependencies
.
Mixing pixi
and poetry
It's possible to use poetry
in pixi
environments but this is advised against. Pixi supports PyPI dependencies in a different way than poetry
does, and mixing them can lead to unexpected behavior. As you can only use one package manager at a time, it's best to stick to one.
If using poetry on top of a pixi project, you'll always need to install the poetry
environment after the pixi
environment. And let pixi
handle the python
and poetry
installation.
In this tutorial, we will show you how to create a simple Python project with pixi. We will show some of the features that pixi provides, that are currently not a part of pdm
, poetry
etc.
Pixi builds upon the conda ecosystem, which allows you to create a Python environment with all the dependencies you need. This is especially useful when you are working with multiple Python interpreters and bindings to C and C++ libraries. For example, GDAL from PyPI does not have binary C dependencies, but the conda package does. On the other hand, some packages are only available through PyPI, which pixi
can also install for you. Best of both world, let's give it a go!
pixi.toml
and pyproject.toml
","text":"We support two manifest formats: pyproject.toml
and pixi.toml
. In this tutorial, we will use the pyproject.toml
format because it is the most common format for Python projects.
Let's start out by making a directory and creating a new pyproject.toml
file.
pixi init pixi-py --format pyproject\n
This gives you the following pyproject.toml:
[project]\nname = \"pixi-py\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [{name = \"Tim de Jager\", email = \"tim@prefix.dev\"}]\nrequires-python = \">= 3.11\"\ndependencies = []\n\n[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\"]\n\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n\n[tool.pixi.tasks]\n
Let's add the Python project to the tree:
Linux & macOSWindowscd pixi-py # move into the project directory\nmkdir pixi_py\ntouch pixi_py/__init__.py\n
cd pixi-py\nmkdir pixi_py\ntype nul > pixi_py\\__init__.py\n
We now have the following directory structure:
.\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\n\u2514\u2500\u2500 pyproject.toml\n
We've used a flat-layout here but pixi supports both flat- and src-layouts.
"},{"location":"tutorials/python/#whats-in-the-pyprojecttoml","title":"What's in thepyproject.toml
?","text":"Okay, so let's have a look at what's sections have been added and how we can modify the pyproject.toml
.
These first entries were added to the pyproject.toml
file:
# Main pixi entry\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\n# This is your machine platform by default\nplatforms = [\"osx-arm64\"]\n
The channels
and platforms
are added to the [tool.pixi.project]
section. Channels like conda-forge
manage packages similar to PyPI but allow for different packages across languages. The keyword platforms
determines what platform the project supports.
The pixi_py
package itself is added as an editable dependency. This means that the package is installed in editable mode, so you can make changes to the package and see the changes reflected in the environment, without having to re-install the environment.
# Editable installs\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n
In pixi, unlike other package managers, this is explicitly stated in the pyproject.toml
file. The main reason being so that you can choose which environment this package should be included in.
Our projects usually depend on other packages.
$ pixi add black\nAdded black\n
This will result in the following addition to the pyproject.toml
:
# Dependencies\n[tool.pixi.dependencies]\nblack = \">=24.4.2,<24.5\"\n
But we can also be strict about the version that should be used with pixi add black=24
, resulting in
[tool.pixi.dependencies]\nblack = \"24.*\"\n
Now, let's add some optional dependencies:
pixi add --pypi --feature test pytest\n
Which results in the following fields added to the pyproject.toml
:
[project.optional-dependencies]\ntest = [\"pytest\"]\n
After we have added the optional dependencies to the pyproject.toml
, pixi automatically creates a feature
, which can contain a collection of dependencies
, tasks
, channels
, and more.
Sometimes there are packages that aren't available on conda channels but are published on PyPI. We can add these as well, which pixi will solve together with the default dependencies.
$ pixi add black --pypi\nAdded black\nAdded these as pypi-dependencies.\n
which results in the addition to the dependencies
key in the pyproject.toml
dependencies = [\"black\"]\n
When using the pypi-dependencies
you can make use of the optional-dependencies
that other packages make available. For example, black
makes the cli
dependencies option, which can be added with the --pypi
keyword:
$ pixi add black[cli] --pypi\nAdded black[cli]\nAdded these as pypi-dependencies.\n
which updates the dependencies
entry to
dependencies = [\"black[cli]\"]\n
Optional dependencies in pixi.toml
This tutorial focuses on the use of the pyproject.toml
, but in case you're curious, the pixi.toml
would contain the following entry after the installation of a PyPI package including an optional dependency:
[pypi-dependencies]\nblack = { version = \"*\", extras = [\"cli\"] }\n
"},{"location":"tutorials/python/#installation-pixi-install","title":"Installation: pixi install
","text":"Now let's install
the project with pixi install
:
$ pixi install\n\u2714 Project in /path/to/pixi-py is ready to use!\n
We now have a new directory called .pixi
in the project root. This directory contains the environment that was created when we ran pixi install
. The environment is a conda environment that contains the dependencies that we specified in the pyproject.toml
file. We can also install the test environment with pixi install -e test
. We can use these environments for executing code.
We also have a new file called pixi.lock
in the project root. This file contains the exact versions of the dependencies that were installed in the environment across platforms.
Using pixi list
, you can see what's in the environment, this is essentially a nicer view on the lock file:
$ pixi list\nPackage Version Build Size Kind Source\nbzip2 1.0.8 h93a5062_5 119.5 KiB conda bzip2-1.0.8-h93a5062_5.conda\nblack 24.4.2 3.8 MiB pypi black-24.4.2-cp312-cp312-win_amd64.http.whl\nca-certificates 2024.2.2 hf0a4a13_0 152.1 KiB conda ca-certificates-2024.2.2-hf0a4a13_0.conda\nlibexpat 2.6.2 hebf3989_0 62.2 KiB conda libexpat-2.6.2-hebf3989_0.conda\nlibffi 3.4.2 h3422bc3_5 38.1 KiB conda libffi-3.4.2-h3422bc3_5.tar.bz2\nlibsqlite 3.45.2 h091b4b1_0 806 KiB conda libsqlite-3.45.2-h091b4b1_0.conda\nlibzlib 1.2.13 h53f4e23_5 47 KiB conda libzlib-1.2.13-h53f4e23_5.conda\nncurses 6.4.20240210 h078ce10_0 801 KiB conda ncurses-6.4.20240210-h078ce10_0.conda\nopenssl 3.2.1 h0d3ecfb_1 2.7 MiB conda openssl-3.2.1-h0d3ecfb_1.conda\npython 3.12.3 h4a7b5fc_0_cpython 12.6 MiB conda python-3.12.3-h4a7b5fc_0_cpython.conda\nreadline 8.2 h92ec313_1 244.5 KiB conda readline-8.2-h92ec313_1.conda\ntk 8.6.13 h5083fa2_1 3 MiB conda tk-8.6.13-h5083fa2_1.conda\ntzdata 2024a h0c530f3_0 117 KiB conda tzdata-2024a-h0c530f3_0.conda\npixi-py 0.1.0 pypi . (editable)\nxz 5.2.6 h57fd34a_0 230.2 KiB conda xz-5.2.6-h57fd34a_0.tar.bz2\n
Python
The Python interpreter is also installed in the environment. This is because the Python interpreter version is read from the requires-python
field in the pyproject.toml
file. This is used to determine the Python version to install in the environment. This way, pixi automatically manages/bootstraps the Python interpreter for you, so no more brew
, apt
or other system install steps.
Here, you can see the different conda and Pypi packages listed. As you can see, the pixi-py
package that we are working on is installed in editable mode. Every environment in pixi is isolated but reuses files that are hard-linked from a central cache directory. This means that you can have multiple environments with the same packages but only have the individual files stored once on disk.
We can create the default
and test
environments based on our own test
feature from the optional-dependency
:
pixi project environment add default --solve-group default\npixi project environment add test --feature test --solve-group default\n
Which results in:
# Environments\n[tool.pixi.environments]\ndefault = { solve-group = \"default\" }\ntest = { features = [\"test\"], solve-group = \"default\" }\n
Solve Groups Solve groups are a way to group dependencies together. This is useful when you have multiple environments that share the same dependencies. For example, maybe pytest
is a dependency that influences the dependencies of the default
environment. By putting these in the same solve group, you ensure that the versions in test
and default
are exactly the same.
The default
environment is created when you run pixi install
. The test
environment is created from the optional dependencies in the pyproject.toml
file. You can execute commands in this environment with e.g. pixi run -e test python
Let's add some code to the pixi-py
package. We will add a new function to the pixi_py/__init__.py
file:
from rich import print\n\ndef hello():\n return \"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\"\n\ndef say_hello():\n print(*hello())\n
Now add the rich
dependency from PyPI using: pixi add --pypi rich
.
Let's see if this works by running:
pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n
Slow? This might be slow (about 2 minutes) the first time because pixi installs the project, but it will be near instant the second time.
Pixi runs the self-installed Python interpreter. Then we import the pixi_py
package, which is installed in editable mode. The code calls the say_hello
function that we just added. And it works! Cool!
Okay, let's add a test for this function. Create a tests/test_me.py
file in the root of the project.
Giving us the following project structure:
.\n\u251c\u2500\u2500 pixi.lock\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\n\u251c\u2500\u2500 pyproject.toml\n\u2514\u2500\u2500 tests/test_me.py\n
from pixi_py import hello\n\ndef test_pixi_py():\n assert hello() == (\"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\")\n
Let's add an easy task for running the tests.
$ pixi task add --feature test test \"pytest\"\n\u2714 Added task `test`: pytest .\n
Pixi has a task system that makes it easy to run commands, similar to npm
scripts or something you would specify in a Justfile
.
Tasks are a powerful pixi feature that runs in a cross-platform shell. They support caching, task dependencies and more. Read more about tasks in the tasks section.
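For illustration, a cached, chained task could be sketched like this in pyproject.toml (the fmt task, the ruff command, and the input paths are assumptions for this sketch, not part of the tutorial):

```toml
# Hypothetical pyproject.toml fragment: tasks with chaining and input-based caching
[tool.pixi.tasks]
fmt = "ruff format ."  # assumed helper task, for illustration only
# Re-runs only when files under pixi_py/ or tests/ change, and runs fmt first
test = { cmd = "pytest .", depends-on = ["fmt"], inputs = ["pixi_py", "tests"] }
```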
$ pixi r test\n\u2728 Pixi task (test): pytest .\n================================================================================================= test session starts =================================================================================================\nplatform darwin -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0\nrootdir: /private/tmp/pixi-py\nconfigfile: pyproject.toml\ncollected 1 item\n\ntest_me.py . [100%]\n\n================================================================================================== 1 passed in 0.00s =================================================================================================\n
Neat! It seems to be working!
"},{"location":"tutorials/python/#test-vs-default-environment","title":"Test vs Default environment","text":"It is interesting to compare the output of the two environments:
pixi list -e test\n# vs. the default environment\npixi list\n
The test environment has:
Package Version Build Size Kind Source\n...\npytest 8.1.1 1.1 MiB pypi pytest-8.1.1-py3-none-any.whl\n...\n
But the default environment is missing this package. This way, you can fine-tune your environments to only contain the packages that are needed for that environment. E.g. you could also have a dev
environment that has pytest
and ruff
installed, but you could omit these from the prod
environment. There is a docker example that shows how to set up a minimal prod
environment and copy from there.
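As a sketch, such a dev feature and environment could look like this in pyproject.toml (the feature name and the wildcard pins are illustrative assumptions):

```toml
# Hypothetical fragment: dev-only tooling kept out of a minimal prod environment
[tool.pixi.feature.dev.dependencies]
pytest = "*"
ruff = "*"

[tool.pixi.environments]
default = { solve-group = "default" }
dev = { features = ["dev"], solve-group = "default" }
```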
One last thing: pixi allows pypi
packages to depend on conda
packages. Let's confirm this with pixi list
:
$ pixi list\nPackage Version Build Size Kind Source\n...\npygments 2.17.2 4.1 MiB pypi pygments-2.17.2-py3-none-any.http.whl\n...\n
Let's explicitly add pygments
to the pyproject.toml
file, as it is a dependency of the rich
package.
pixi add pygments\n
This will add the following to the pyproject.toml
file:
[tool.pixi.dependencies]\npygments = \">=2.17.2,<2.18\"\n
We can now see that the pygments
package is installed as a conda package.
$ pixi list\nPackage Version Build Size Kind Source\n...\npygments 2.17.2 pyhd8ed1ab_0 840.3 KiB conda pygments-2.17.2-pyhd8ed1ab_0.conda\n
This way, PyPI dependencies and conda dependencies can be mixed and matched to seamlessly interoperate.
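Put together, the relevant manifest tables now look roughly like this (the rich entry reflects the earlier pixi add --pypi rich; exact version pins may differ):

```toml
# Sketch of the mixed-dependency pyproject.toml tables from this tutorial
[project]
dependencies = ["rich"]            # resolved from PyPI

[tool.pixi.dependencies]
pygments = ">=2.17.2,<2.18"        # resolved from conda, satisfying rich's dependency
```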
$ pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n
And it still works!
"},{"location":"tutorials/python/#conclusion","title":"Conclusion","text":"In this tutorial, you've seen how easy it is to use a pyproject.toml
to manage your pixi dependencies and environments. We have also explored how to use PyPI and conda dependencies seamlessly together in the same project, and how to use optional dependencies to define additional environments.
Hopefully, this provides a flexible and powerful way to manage your Python projects and a fertile base for further exploration with Pixi.
Thanks for reading! Happy Coding \ud83d\ude80
Any questions? Feel free to reach out or share this tutorial on X, join our Discord, send us an e-mail or follow our GitHub.
"},{"location":"tutorials/ros2/","title":"Tutorial: Develop a ROS 2 package with pixi
","text":"In this tutorial, we will show you how to develop a ROS 2 package using pixi
. The tutorial is written to be executed from top to bottom; skipping steps might result in errors.
The audience for this tutorial is developers who are familiar with ROS 2 and are interested in trying pixi for their development workflow.
"},{"location":"tutorials/ros2/#prerequisites","title":"Prerequisites","text":"pixi
installed. If you haven't installed it yet, you can follow the instructions in the installation guide. The crux of this tutorial is to show that you only need pixi!If you're new to pixi, you can check out the basic usage guide. This will teach you the basics of a pixi project within 3 minutes.
"},{"location":"tutorials/ros2/#create-a-pixi-project","title":"Create a pixi project.","text":"pixi init my_ros2_project -c robostack-staging -c conda-forge\ncd my_ros2_project\n
It should have created a directory structure like this:
my_ros2_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n
The pixi.toml
file is the manifest file for your project. It should look like this:
[project]\nname = \"my_ros2_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"robostack-staging\", \"conda-forge\"]\n# Your project can support multiple platforms, the current platform will be automatically added.\nplatforms = [\"linux-64\"]\n\n[tasks]\n\n[dependencies]\n
The channels
you added to the init
command are repositories of packages; you can search these repositories through our prefix.dev website. The platforms
are the systems you want to support. In pixi you can support multiple platforms, but you have to define which ones so pixi can check whether your dependencies support them. For the rest of the fields, you can fill them in as you see fit.
To use a pixi project you don't need any dependencies on your system; all the dependencies you need should be added through pixi, so other users can use your project without any issues.
Let's start with the turtlesim
example
pixi add ros-humble-desktop ros-humble-turtlesim\n
This will add the ros-humble-desktop
and ros-humble-turtlesim
packages to your manifest. Depending on your internet speed this might take a minute, as it will also install ROS in your project folder (.pixi
).
Now run the turtlesim
example.
pixi run ros2 run turtlesim turtlesim_node\n
Or use the shell
command to start an activated environment in your terminal.
pixi shell\nros2 run turtlesim turtlesim_node\n
Congratulations, you have ROS 2 running on your machine with pixi!
Some more fun with the turtleTo control the turtle you can run the following command in a new terminal
cd my_ros2_project\npixi run ros2 run turtlesim turtle_teleop_key\n
Now you can control the turtle with the arrow keys on your keyboard.
"},{"location":"tutorials/ros2/#add-a-custom-python-node","title":"Add a custom Python node","text":"As ROS works with custom nodes, let's add a custom node to our project.
pixi run ros2 pkg create --build-type ament_python --destination-directory src --node-name my_node my_package\n
To build the package we need some more dependencies:
pixi add colcon-common-extensions \"setuptools<=58.2.0\"\n
Add the created initialization script for the ROS workspace to your manifest file.
Then run the build command
pixi run colcon build\n
This will create a sourceable script in the install
folder; you can source this script through an activation script to use your custom node. Normally this would be the script you add to your .bashrc
, but now you tell pixi to use it.
[activation]\nscripts = [\"install/setup.sh\"]\n
pixi.toml[activation]\nscripts = [\"install/setup.bat\"]\n
Multi platform support You can add multiple activation scripts for different platforms, so you can support multiple platforms with one project. Use the following example to add support for both Linux and Windows, using the target syntax.
[project]\nplatforms = [\"linux-64\", \"win-64\"]\n\n[activation]\nscripts = [\"install/setup.sh\"]\n[target.win-64.activation]\nscripts = [\"install/setup.bat\"]\n
Now you can run your custom node with the following command
pixi run ros2 run my_package my_node\n
"},{"location":"tutorials/ros2/#simplify-the-user-experience","title":"Simplify the user experience","text":"In pixi
we have a feature called tasks
, which allows you to define a task in your manifest file and run it with a simple command. Let's add tasks to run the turtlesim
example and the custom node.
pixi task add sim \"ros2 run turtlesim turtlesim_node\"\npixi task add build \"colcon build --symlink-install\"\npixi task add hello \"ros2 run my_package my_node\"\n
Now you can run these tasks by simply running:
pixi run sim\npixi run build\npixi run hello\n
Advanced task usage Tasks are a powerful feature in pixi.
depends-on
to the tasks to create a task chain.cwd
to the tasks to run the task in a different directory from the root of the project.inputs
and outputs
to the tasks to create a task that only runs when the inputs are changed.target
syntax to run specific tasks on specific machines.[tasks]\nsim = \"ros2 run turtlesim turtlesim_node\"\nbuild = {cmd = \"colcon build --symlink-install\", inputs = [\"src\"]}\nhello = { cmd = \"ros2 run my_package my_node\", depends-on = [\"build\"] }\n
"},{"location":"tutorials/ros2/#build-a-c-node","title":"Build a C++ node","text":"To build a C++ node you need to add the ament_cmake
and some other build dependencies to your manifest file.
pixi add ros-humble-ament-cmake-auto compilers pkg-config cmake ninja\n
Now you can create a C++ node with the following command
pixi run ros2 pkg create --build-type ament_cmake --destination-directory src --node-name my_cpp_node my_cpp_package\n
Now you can build it again and run it with the following commands
# Passing arguments to the build command to build with Ninja, add them to the manifest if you want to default to ninja.\npixi run build --cmake-args -G Ninja\npixi run ros2 run my_cpp_package my_cpp_node\n
Tip Add the cpp task to the manifest file to simplify the user experience.
pixi task add hello-cpp \"ros2 run my_cpp_package my_cpp_node\"\n
"},{"location":"tutorials/ros2/#conclusion","title":"Conclusion","text":"In this tutorial, we showed you how to create a Python & CMake ROS 2 project using pixi
. We also showed you how to add dependencies to your project using pixi
, and how to run your project using pixi run
. This way you can make sure that your project is reproducible on any machine that has pixi
installed.
Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!
"},{"location":"tutorials/ros2/#frequently-asked-questions","title":"Frequently asked questions","text":""},{"location":"tutorials/ros2/#what-happens-with-rosdep","title":"What happens withrosdep
?","text":"Currently, we don't support rosdep
in a pixi environment, so you'll have to add the packages using pixi add
. rosdep
will call conda install
which isn't supported in a pixi environment.
pixi
","text":"In this tutorial, we will show you how to develop a Rust package using pixi
. The tutorial is written to be executed from top to bottom; skipping steps might result in errors.
The audience for this tutorial is developers who are familiar with Rust and cargo
and are interested in trying pixi for their development workflow. The benefit is that, within a Rust workflow, you can lock both Rust and the C/system dependencies your project might be using. E.g. tokio users will almost certainly use openssl
.
If you're new to pixi, you can check out the basic usage guide. This will teach you the basics of a pixi project within 3 minutes.
"},{"location":"tutorials/rust/#prerequisites","title":"Prerequisites","text":"pixi
installed. If you haven't installed it yet, you can follow the instructions in the installation guide. The crux of this tutorial is to show you only need pixi!pixi init my_rust_project\ncd my_rust_project\n
It should have created a directory structure like this:
my_rust_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n
The pixi.toml
file is the manifest file for your project. It should look like this:
[project]\nname = \"my_rust_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # (1)!\n\n[tasks]\n\n[dependencies]\n
platforms
is set to your system's platform by default. You can change it to any platform you want to support. e.g. [\"linux-64\", \"osx-64\", \"osx-arm64\", \"win-64\"]
.To use a pixi project you don't need any dependencies on your system, all the dependencies you need should be added through pixi, so other users can use your project without any issues.
pixi add rust\n
This will add the rust
package to your pixi.toml
file under [dependencies]
. Which includes the rust
toolchain, and cargo
.
cargo
project","text":"Now that you have rust installed, you can create a cargo
project in your pixi
project.
pixi run cargo init\n
pixi run
is pixi's way to run commands in the pixi
environment; it makes sure that the environment is set up correctly for the command to run. It runs its own cross-platform shell; if you want more information, check out the tasks
documentation. You can also activate the environment in your own shell by running pixi shell
; after that you don't need pixi run ...
anymore.
Now we can build a cargo
project using pixi
.
pixi run cargo build\n
To simplify the build process, you can add a build
task to your pixi.toml
file using the following command: pixi task add build \"cargo build\"\n
This creates the following field in the pixi.toml
file: pixi.toml[tasks]\nbuild = \"cargo build\"\n
And now you can build your project using:
pixi run build\n
You can also run your project using:
pixi run cargo run\n
You can simplify this with a task again: pixi task add start \"cargo run\"\n
So you should get the following output:
pixi run start\nHello, world!\n
Congratulations, you have a Rust project running on your machine with pixi!
"},{"location":"tutorials/rust/#next-steps-why-is-this-useful-when-there-is-rustup","title":"Next steps, why is this useful when there isrustup
?","text":"Cargo is not a binary package manager, but a source-based package manager. This means that you need to have the Rust compiler installed on your system to use it. And possibly other dependencies that are not included in the cargo
package manager. For example, you might need to install openssl
or libssl-dev
on your system to build a package. This is the case for pixi
as well, but pixi
will install these dependencies in your project folder, so you don't have to worry about them.
Add the following dependencies to your cargo project:
pixi run cargo add git2\n
If your system is not preconfigured to build C code and does not have the libssl-dev
package installed you will not be able to build the project:
pixi run build\n...\nCould not find directory of OpenSSL installation, and this `-sys` crate cannot\nproceed without this knowledge. If OpenSSL is installed and this crate had\ntrouble finding it, you can set the `OPENSSL_DIR` environment variable for the\ncompilation process.\n\nMake sure you also have the development packages of openssl installed.\nFor example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.\n\nIf you're in a situation where you think the directory *should* be found\nautomatically, please open a bug at https://github.com/sfackler/rust-openssl\nand include information about your system as well as this message.\n\n$HOST = x86_64-unknown-linux-gnu\n$TARGET = x86_64-unknown-linux-gnu\nopenssl-sys = 0.9.102\n\n\nIt looks like you're compiling on Linux and also targeting Linux. Currently this\nrequires the `pkg-config` utility to find OpenSSL but unfortunately `pkg-config`\ncould not be found. If you have OpenSSL installed you can likely fix this by\ninstalling `pkg-config`.\n...\n
You can fix this by adding the necessary dependencies for building git2 with pixi: pixi add openssl pkg-config compilers\n
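After this command, the [dependencies] table of pixi.toml might look roughly like the following (the wildcard pins are illustrative; pixi writes concrete version ranges):

```toml
# Hypothetical pixi.toml [dependencies] after the adds above
[dependencies]
rust = "*"
openssl = "*"
pkg-config = "*"
compilers = "*"
```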
Now you should be able to build your project again:
pixi run build\n...\n Compiling git2 v0.18.3\n Compiling my_rust_project v0.1.0 (/my_rust_project)\n Finished dev [unoptimized + debuginfo] target(s) in 7.44s\n Running `target/debug/my_rust_project`\n
"},{"location":"tutorials/rust/#extra-add-more-tasks","title":"Extra: Add more tasks","text":"You can add more tasks to your pixi.toml
file to simplify your workflow.
For example, you can add a test
task to run your tests:
pixi task add test \"cargo test\"\n
And you can add a clean
task to clean your project:
pixi task add clean \"cargo clean\"\n
You can add a formatting task to your project:
pixi task add fmt \"cargo fmt\"\n
You can extend these tasks to run multiple commands with the use of the depends-on
field.
pixi task add lint \"cargo clippy\" --depends-on fmt\n
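Taken together, the resulting [tasks] table might look like this (assuming all the tasks added above):

```toml
[tasks]
build = "cargo build"
start = "cargo run"
test = "cargo test"
clean = "cargo clean"
fmt = "cargo fmt"
lint = { cmd = "cargo clippy", depends-on = ["fmt"] }
```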
"},{"location":"tutorials/rust/#conclusion","title":"Conclusion","text":"In this tutorial, we showed you how to create a Rust project using pixi
. We also showed you how to add dependencies to your project using pixi
. This way you can make sure that your project is reproducible on any system that has pixi
installed.
Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!
"},{"location":"CHANGELOG/","title":"Changelog","text":"All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
"},{"location":"CHANGELOG/#0300-2024-09-19","title":"[0.30.0] - 2024-09-19","text":""},{"location":"CHANGELOG/#highlights","title":"\u2728 Highlights","text":"I want to thank @synapticarbors and @abkfenris for starting the work on pixi project export
. Pixi now supports the export of a conda environment.yml
file and a conda explicit specification file. This is a great addition to the project and will help users to share their projects with other non pixi users.
pixi search
by @Hofer-Julian in #2018environment.yml
by @abkfenris in #2003system-requirements
information by @ruben-arts in #2079conda-pypi-map
for feature channels by @nichmor in #2038subdirectory
in pypi url by @ruben-arts in #2065pyproject.toml
by @ruben-arts in #2075strip_channel_alias
from rattler by @Hofer-Julian in #2017psutils
by @Hofer-Julian in #2083pixi init
for pyproject.toml
by @Hofer-Julian in #1947init
add dependencies independent of target and don't install by @ruben-arts in #1916keyrings.artifacts
to the list of project built with pixi
by @jslorrma in #1908INIT_CWD
to activated env pixi run
by @ruben-arts in #1798__linux
default to 4.18
by @ruben-arts in #1887pixi global
by @Hofer-Julian in #1800pixi global
proposal by @Hofer-Julian in #1861channels
required in pixi global
manifest by @Hofer-Julian in #1868rlimit
by @baszalmstra in #1766NamedChannelOrUrl
by @ruben-arts in #1820pixi search
by @baszalmstra in #1849pixi tree -i
duplicate output by @baszalmstra in #1847find-links
with manifest-path by @baszalmstra in #1864.pixi
folder by @baszalmstra in #1866pub(crate) fn
in order to detect and remove unused functions by @Hofer-Julian in #1805TaskNode::full_command
for tests by @Hofer-Julian in #1809Default
for more structs by @Hofer-Julian in #1824get_up_to_date_prefix
to update_prefix
by @Hofer-Julian in #1837HasSpecs
implementation more functional by @Hofer-Julian in #1863init
by @ruben-arts in #1775pixi_spec
crate by @baszalmstra in #1741This release contains a lot of refactoring and improvements to the codebase, in preparation for future features and improvements. Including with that we've fixed a ton of bugs. To make sure we're not breaking anything we've added a lot of tests and CI checks. But let us know if you find any issues!
As a reminder, you can update pixi using pixi self-update
and move to a specific version, including backwards, with pixi self-update --version 0.27.0
.
pixi run
completion for fish
shell by @dennis-wey in #1680pixi init
create hatchling pyproject.toml by @Hofer-Julian in #1693[project]
table optional for pyproject.toml
manifests by @olivier-lacroix in #1732fish
completions location by @tdejager in #1647hatchling
by @Hofer-Julianupdate
command exist by @olivier-lacroix in #1690pixi exec
in GHA docs by @pavelzw in #1724hatchling
is used everywhere in documentation by @olivier-lacroix in #1733pep440_rs
from crates.io and use replace by @baszalmstra in #1698pixi add
with more than just package name and version by @ruben-arts in #1704--no-lockfile-update
by @ruben-arts in #1683pixi.toml
when pyproject.toml
is available. by @ruben-arts in #1640macos-13
by @ruben-arts in #1739activation.env
vars are by @ruben-arts in #1740pixi_manifest
by @baszalmstra in #1656pixi::consts
and pixi::config
into separate crates by @tdejager in #1684pixi_manifest
by @tdejager in #1700HasFeatures
by @tdejager in #1712HasFeatures
trait by @tdejager in #1717utils
by @tdejager in #1718fancy
to its own crate by @tdejager in #1722config
to repodata functions by @tdejager in #1723pypi-mapping
to its own crate by @tdejager in #1725utils
into 2 crates by @tdejager in #1736pixi_manifest
lib by @tdejager in #1661pinning-strategy
in the config. e.g. semver
-> >=1.2.3,<2
and no-pin
-> *
) #1516channel-priority
in the manifest. #1631pinning-strategy
to the configuration by @ruben-arts in #1516channel-priority
to the manifest and solve by @ruben-arts in #1631nushell
completion by @Hofer-Julian in #1599nushell
completions for pixi run
by @Hofer-Julian in #1627pixi run --environment
for nushell by @Hofer-Julian in #1636pyproject.toml
parser by @nichmor in #1592pixi global install
by @ruben-arts in #1626cargo add
by @Hofer-Julian in #1600cargo add
\" by @Hofer-Julian in #1605poetry
and conda
by @ruben-arts in #1624clap_complete_nushell
to dependencies by @Hofer-Julian in #1625stdout
for machine readable output by @Hofer-Julian in #1639pixi exec
command, execute commands in temporary environments, useful for testing in short-lived sessions.system-requirements
table, this is explained heregeos-rs
by @Hofer-Julian in #1563pixi shell
by @ruben-arts in #1507unix
machines, using pixi run --clean-env TASK_NAME
.pixi clean
or the cache with pixi clean cache
pixi clean
command by @ruben-arts in #1325--clean-env
flag to tasks and run command by @ruben-arts in #1395description
field to task
by @jjjermiah in #1479list_global_packages
by @dhirschfeld in #1458pixi info
by @ruben-arts in #1459--frozen
by @ruben-arts in #1468pixi install --all
output missing newline by @vigneshmanick in #1487Full commit history
"},{"location":"CHANGELOG/#0230-2024-05-27","title":"[0.23.0] - 2024-05-27","text":""},{"location":"CHANGELOG/#highlights_7","title":"\u2728 Highlights","text":"pixi config
and pixi update
pixi config
allows you to edit
, set
, unset
, append
, prepend
and list
your local/global or system configuration.pixi update
re-solves the full lockfile or use pixi update PACKAGE
to only update PACKAGE
, making sure your project is using the latest versions that the manifest allows for.pixi config
command by @chawyehsu in #1339pixi list --explicit
flag command by @jjjermiah in #1403[activation.env]
table for environment variables by @ruben-arts in #1156--all
at once by @tdejager in #1413pixi update
command to re-solve the lockfile by @baszalmstra in #1431 (fixes 20 :thumbsup:)detached-environments
to the config, move environments outside the project folder by @ruben-arts in #1381 (fixes 11 :thumbsup:)remove
arguments with add
by @olivier-lacroix in #1406--no-lockfile-update
. by @tobiasraabe in #1396Full commit history
"},{"location":"CHANGELOG/#0220-2024-05-13","title":"[0.22.0] - 2024-05-13","text":""},{"location":"CHANGELOG/#highlights_8","title":"\u2728 Highlights","text":"pixi add --pypi 'package @ package.whl'
, perfect for adding just build wheels to your environment in CI.pixi add --pypi 'package_from_git @ git+https://github.com/org/package.git'
, to add a package from a git repository.pixi add --pypi 'package_from_path @ file:///path/to/package' --editable
, to add a package from a local path.pixi add --pypi
by @wolfv in #1244install
cli doc by @vigneshmanick in #1336pixi project help
by @notPlancha in #1358pypi
dependencies. by @ruben-arts in #1366Full commit history
"},{"location":"CHANGELOG/#0211-2024-05-07","title":"[0.21.1] - 2024-05-07","text":""},{"location":"CHANGELOG/#fixed_15","title":"Fixed","text":"Full commit history
"},{"location":"CHANGELOG/#0210-2024-05-06","title":"[0.21.0] - 2024-05-06","text":""},{"location":"CHANGELOG/#highlights_9","title":"\u2728 Highlights","text":"osx-64
on osx-arm64
and wasm
environments.no-default-feature
option to simplify usage of environments.osx-64
on osx-arm64
and wasm
environments by @wolfv in #1020no-default-feature
option to environments by @olivier-lacroix in #1092/etc/pixi/config.toml
to global configuration search paths by @pavelzw in #1304task list
by @Hoxbro in #1286depends_on
to depends-on
by @ruben-arts in #1310pixi q
instead of only name by @ruben-arts in #1314rattler
by @baszalmstra in #1327Full commit history
"},{"location":"CHANGELOG/#0201-2024-04-26","title":"[0.20.1] - 2024-04-26","text":""},{"location":"CHANGELOG/#highlights_10","title":"\u2728 Highlights","text":"schema.json
normalization, add to docs by @bollwyvl in #1265Full commit history
"},{"location":"CHANGELOG/#0200-2024-04-19","title":"[0.20.0] - 2024-04-19","text":""},{"location":"CHANGELOG/#highlights_11","title":"\u2728 Highlights","text":"env
variables in the task
definition, these can also be used as default values for parameters in your task which you can overwrite with your shell's env variables. e.g. task = { cmd = \"task to run\", env = { VAR=\"value1\", PATH=\"my/path:$PATH\" } }
env
to the tasks to specify tasks specific environment variables by @wolfv in https://github.com/prefix-dev/pixi/pull/972--pyproject
option to pixi init
with a pyproject.toml by @olivier-lacroix in #1188pixi.lock
by @ruben-arts in #1209priority
definition by @ruben-arts in #1234--no-deps
when pip installing in editable mode by @glemaitre in #1220_
with -
when creating environments from features by @wolfv in #1203task = { cmd = \"task to run\", cwd = \"folder\", inputs = \"input.txt\", output = \"output.txt\"}
Where input.txt
and output.txt
where previously in folder
they are now relative the project root. This changed in: #1202task = { cmd = \"task to run\", inputs = \"input.txt\"}
previously searched for all input.txt
files now only for the ones in the project root. This changed in: #1204Full commit history
"},{"location":"CHANGELOG/#0191-2024-04-11","title":"[0.19.1] - 2024-04-11","text":""},{"location":"CHANGELOG/#highlights_12","title":"\u2728 Highlights","text":"This fixes the issue where pixi would generate broken environments/lockfiles when a mapping for a brand-new version of a package is missing.
"},{"location":"CHANGELOG/#changed_12","title":"Changed","text":"Full commit history
"},{"location":"CHANGELOG/#0190-2024-04-10","title":"[0.19.0] - 2024-04-10","text":""},{"location":"CHANGELOG/#highlights_13","title":"\u2728 Highlights","text":"pixi tree
command to show the dependency tree of the project.pixi tree
command to show dependency tree by @abkfenris in #1069pixi add --feature test --pypi package
) by @ruben-arts in #1135--no-progress
to disable all progress bars by @baszalmstra in #1105pixi add conda-forge::rattler-build
) by @baszalmstra in #1079tool.pixi.project.name
from project.name
by @olivier-lacroix in #1112features
and environments
from extras by @olivier-lacroix in #1077PIXI_ARCH
for pixi installation by @beenje in #1129tree
and list
commands by @ruben-arts in #1145conda-meta/history
to prevent conda.history.History.parse()
error by @jaimergp in #1117pyproject.toml
by @tdejager in #1121Full commit history
"},{"location":"CHANGELOG/#0180-2024-04-02","title":"[0.18.0] - 2024-04-02","text":""},{"location":"CHANGELOG/#highlights_14","title":"\u2728 Highlights","text":"pyproject.toml
, now pixi reads from the [tool.pixi]
table.git
, path
, and url
dependencies.[!TIP] These new features are part of the ongoing effort to make pixi more flexible, powerful, and comfortable for the python users. They are still in progress so expect more improvements on these features soon, so please report any issues you encounter and follow our next releases!
"},{"location":"CHANGELOG/#added_11","title":"Added","text":"pyproject.toml
by @olivier-lacroix in #999XDG_CONFIG_HOME
and XDG_CACHE_HOME
compliance by @chawyehsu in #1050zsh
may be used for installation on macOS by @pya in #1091pixi auth
documentation by @ytausch in #1076rstudio
to the IDE integration docs by @wolfv in #1144Full commit history
"},{"location":"CHANGELOG/#0171-2024-03-21","title":"[0.17.1] - 2024-03-21","text":""},{"location":"CHANGELOG/#highlights_15","title":"\u2728 Highlights","text":"A quick bug-fix release for pixi list
.
pixi list
by @baszalmstra in #1033pixi global
commands, thanks to @chawyehsu!task
execution thanks to caching \ud83d\ude80 Tasks that already executed successfully can be skipped based on the hash of the inputs
and outputs
.inputs
and outputs
hash based task skipping by @wolfv in #933pixi search
with platform selection and making limit optional by @wolfv in #979watch_file
in direnv usage by @pavelzw in #983linenums
to avoid buggy visualization by @ruben-arts in #1002install.sh
in Git Bash by @jdblischak in #966json
entries by @wolfv in #971tool
to strict json schema by @ruben-arts in #969Full commit history
"},{"location":"CHANGELOG/#0161-2024-03-11","title":"[0.16.1] - 2024-03-11","text":""},{"location":"CHANGELOG/#fixed_23","title":"Fixed","text":"0.16.0
by @ruben-arts in #951Full commit history
"},{"location":"CHANGELOG/#0160-2024-03-09","title":"[0.16.0] - 2024-03-09","text":""},{"location":"CHANGELOG/#highlights_17","title":"\u2728 Highlights","text":"rip
and add uv
as the PyPI resolver and installer.Full Commit history
"},{"location":"CHANGELOG/#0152-2024-02-29","title":"[0.15.2] - 2024-02-29","text":""},{"location":"CHANGELOG/#changed_17","title":"Changed","text":"v0.19.0
by @AliPiccioniQC in #885pixi run
if platform is not supported by @ruben-arts in #878Full commit history
"},{"location":"CHANGELOG/#0151-2024-02-26","title":"[0.15.1] - 2024-02-26","text":""},{"location":"CHANGELOG/#added_14","title":"Added","text":"pixi global list
display format by @chawyehsu in #723init --import
by @ruben-arts in #855Full commit history
"},{"location":"CHANGELOG/#0150-2024-02-23","title":"[0.15.0] - 2024-02-23","text":""},{"location":"CHANGELOG/#highlights_18","title":"\u2728 Highlights","text":"[pypi-dependencies]
now get build in the created environment so it uses the conda installed build tools.pixi init --import env.yml
to import an existing conda environment file.[target.unix.dependencies]
to specify dependencies for unix systems instead of per platform.[!WARNING] This versions build failed, use v0.15.1
--feature
to pixi add
(#803)PIXI_NO_PATH_UPDATE
variable (#822)mike
to the documentation and update looks (#809)self-update
(#823)jlap
for now (#836)Full commit history
"},{"location":"CHANGELOG/#0140-2024-02-15","title":"[0.14.0] - 2024-02-15","text":""},{"location":"CHANGELOG/#highlights_19","title":"\u2728 Highlights","text":"Now, solve-groups
can be used in [environments]
to ensure dependency alignment across different environments without simultaneous installation. This feature is particularly beneficial for managing identical dependencies in test
and production
environments. Example configuration:
[environments]\ntest = { features = [\"prod\", \"test\"], solve-groups = [\"group1\"] }\nprod = { features = [\"prod\"], solve-groups = [\"group1\"] }\n
This setup simplifies managing dependencies that must be consistent across test
and production
."},{"location":"CHANGELOG/#added_16","title":"Added","text":"-f
/--feature
to the pixi project platform
command by @ruben-arts in #785pixi list
by @ruben-arts in #775shell-hook
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/811sdist
with direct references by @nichmor in https://github.com/prefix-dev/pixi/pull/813environments
by @ruben-arts in #790Full commit history
"},{"location":"CHANGELOG/#0130-2024-02-01","title":"[0.13.0] - 2024-02-01","text":""},{"location":"CHANGELOG/#highlights_20","title":"\u2728 Highlights","text":"This release is pretty crazy in amount of features! The major ones are: - We added support for multiple environments. :tada: Checkout the documentation - We added support for sdist
installation, which greatly improves the amount of packages that can be installed from PyPI. :rocket:
[!IMPORTANT]
Renaming of PIXI_PACKAGE_*
variables:
PIXI_PACKAGE_ROOT -> PIXI_PROJECT_ROOT\nPIXI_PACKAGE_NAME -> PIXI_PROJECT_NAME\nPIXI_PACKAGE_MANIFEST -> PIXI_PROJECT_MANIFEST\nPIXI_PACKAGE_VERSION -> PIXI_PROJECT_VERSION\nPIXI_PACKAGE_PLATFORMS -> PIXI_ENVIRONMENT_PLATFORMS\n
Check documentation here: https://pixi.sh/environment/ [!IMPORTANT]
The .pixi/env/
folder has been moved to accommodate multiple environments. If you only have one environment it is now named .pixi/envs/default
.
polarify
use-case as an example by @ruben-arts in #735pixi info -e/--environment
option by @ruben-arts in #676pixi channel add -f/--feature
option by @ruben-arts in #700pixi channel remove -f/--feature
option by @ruben-arts in #706pixi remove -f/--feature
option by @ruben-arts in #680pixi task list -e/--environment
option by @ruben-arts in #694pixi task remove -f/--feature
option by @ruben-arts in #694pixi install -e/--environment
option by @ruben-arts in #722pypi-dependencies
by @tdejager in #664pypi-dependencies
by @tdejager in #716pixi list
command by @hadim in #665pixi shell-hook
command by @orhun in #672#679 #684pixi self-update
by @hadim in #675PIXI_NO_PATH_UPDATE
for PATH update suppression by @chawyehsu in #692PyPiRequirement
by @orhun in #744tabwriter
instead of comfy_table
by @baszalmstra in #745[ or ]
) by @JafarAbdi in #677__pycache__
removal issues by @wolfv in #573pixi search
result correct by @chawyehsu in #713pixi info
by @ruben-arts in #728Full commit history
"},{"location":"CHANGELOG/#0120-2024-01-15","title":"[0.12.0] - 2024-01-15","text":""},{"location":"CHANGELOG/#highlights_21","title":"\u2728 Highlights","text":"pixi global upgrade
, pixi project version
commands, a PIXI_HOME
variable.pixi.toml
file already.global upgrade
command to pixi by @trueleo in #614PIXI_HOME
by @chawyehsu in #627--pypi
option to pixi remove
by @marcelotrevisani in https://github.com/prefix-dev/pixi/pull/602project version {major,minor,patch}
CLIs by @hadim in https://github.com/prefix-dev/pixi/pull/633Project
to Environment
by @baszalmstra in #630system-requirements
from Environment by @baszalmstra in #632activation.scripts
into Environment by @baszalmstra in #659pypi-dependencies
from Environment by @baszalmstra in https://github.com/prefix-dev/pixi/pull/656features
and environments
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/636windows
and unix
system requirements by @baszalmstra in https://github.com/prefix-dev/pixi/pull/635CODE_OF_CONDUCT.md
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/648Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.11.0...v0.12.0
"},{"location":"CHANGELOG/#0111-2024-01-06","title":"[0.11.1] - 2024-01-06","text":""},{"location":"CHANGELOG/#fixed_31","title":"Fixed","text":"pixi auth
in #642sdist
and multi environment featurepixi
improve!pixi project {version|channel|platform|description}
by @hadim in #579winget-releaser
gets correct identifier by @ruben-arts in #561system-requirements
by @ruben-arts in #595Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.10.0...v0.11.0
"},{"location":"CHANGELOG/#0100-2023-12-8","title":"[0.10.0] - 2023-12-8","text":""},{"location":"CHANGELOG/#highlights_23","title":"Highlights","text":"pypi-dependencies
support, now install even more of the pypi packages.pixi add --pypi
command to add a pypi package to your project.>=1.2.3, <1.3
) when adding requirement, instead of 1.2.3.*
by @baszalmstra in https://github.com/prefix-dev/pixi/pull/536rip
to fix by @tdejager in https://github.com/prefix-dev/pixi/pull/543.pyc
) support by @baszalmstra.data
directory headers
by @baszalmstrapixi add --pypi
command by @ruben-arts in https://github.com/prefix-dev/pixi/pull/539build
and host
specs while getting the best version by @ruben-arts in https://github.com/prefix-dev/pixi/pull/538winget
releaser by @ruben-arts in https://github.com/prefix-dev/pixi/pull/547rerun-sdk
example, force driven graph of pixi.lock
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/548Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.1...v0.10.0
"},{"location":"CHANGELOG/#091-2023-11-29","title":"[0.9.1] - 2023-11-29","text":""},{"location":"CHANGELOG/#highlights_24","title":"Highlights","text":"scripts
are now fixed. For example: https://github.com/prefix-dev/pixi/issues/516rip
to add scripts by @baszalmstra in https://github.com/prefix-dev/pixi/pull/517Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.0...v0.9.1
"},{"location":"CHANGELOG/#090-2023-11-28","title":"[0.9.0] - 2023-11-28","text":""},{"location":"CHANGELOG/#highlights_25","title":"Highlights","text":"pixi remove
, pixi rm
to remove a package from the environmentpip install -e
issue that was created by release v0.8.0
: https://github.com/prefix-dev/pixi/issues/507pixi remove
command by @Wackyator in https://github.com/prefix-dev/pixi/pull/483[pypi-dependencies]
@baszalmstra in https://github.com/prefix-dev/pixi/pull/508Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.8.0...v0.9.0
"},{"location":"CHANGELOG/#080-2023-11-27","title":"[0.8.0] - 2023-11-27","text":""},{"location":"CHANGELOG/#highlights_26","title":"Highlights","text":"[pypi-dependencies]
ALPHA RELEASE\ud83d\udc0d\ud83c\udf89, you can now add PyPI dependencies to your pixi project.pixi run
has been improved with better errors and showing what task is run.[!NOTE] [pypi-dependencies]
support is still incomplete, missing functionality is listed here: https://github.com/orgs/prefix-dev/projects/6. Our intent is not to have 100% feature parity with pip
, our goal is that you only need pixi
for both conda and pypi packages alike.
rattler
@ruben-arts in https://github.com/prefix-dev/pixi/pull/496pypi-dependencies
by @baszalmstra in https://github.com/prefix-dev/pixi/pull/494command not found
is returned by @ruben-arts in https://github.com/prefix-dev/pixi/pull/488pixi.sh
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/458 && https://github.com/prefix-dev/pixi/pull/459 && https://github.com/prefix-dev/pixi/pull/460RECORD not found
issue by @baszalmstra in https://github.com/prefix-dev/pixi/pull/495.gitignore
and give better errors by @ruben-arts in https://github.com/prefix-dev/pixi/pull/490pypi-dependencies
by @baszalmstra in https://github.com/prefix-dev/pixi/pull/478pypi-dependencies
type by @ruben-arts in https://github.com/prefix-dev/pixi/pull/471pypi-dependencies
parsing errors by @ruben-arts in https://github.com/prefix-dev/pixi/pull/479ctypes
by @liquidcarbon in https://github.com/prefix-dev/pixi/pull/441rerun
example by @ruben-arts in https://github.com/prefix-dev/pixi/pull/489pypi-dependencies
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/481Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.7.0...v0.8.0
"},{"location":"CHANGELOG/#070-2023-11-14","title":"[0.7.0] - 2023-11-14","text":""},{"location":"CHANGELOG/#highlights_27","title":"Highlights","text":"channels = [\"conda-forge\", \"pytorch\"]
All packages found in conda-forge will not be taken from pytorch.pytorch = { version=\"*\", channel=\"pytorch\"}
pixi run <TABTAB>
pixi run docs
!pixi run
for bash
and zsh
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/390python = { version = \"*\" channel=\"conda-forge\" }
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/439project.version
as optional field in the pixi.toml
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/400pixi.toml
to help users find errors by @ruben-arts in https://github.com/prefix-dev/pixi/pull/396install.sh
to create dot file if not present by @humphd in https://github.com/prefix-dev/pixi/pull/408task list
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/431global install
path on windows by @ruben-arts in https://github.com/prefix-dev/pixi/pull/449PIXI_BIN_PATH
use backslashes by @Hofer-Julian in https://github.com/prefix-dev/pixi/pull/442mkdocs
with all documentation by @ruben-arts in https://github.com/prefix-dev/pixi/pull/435Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.6.0...v0.7.0
"},{"location":"CHANGELOG/#060-2023-10-17","title":"[0.6.0] - 2023-10-17","text":""},{"location":"CHANGELOG/#highlights_28","title":"Highlights","text":"This release fixes some bugs and adds the --cwd
option to the tasks.
--frozen
logic to error when there is no lockfile by @ruben-arts in https://github.com/prefix-dev/pixi/pull/373rerun
example to v0.9.1 by @ruben-arts in https://github.com/prefix-dev/pixi/pull/389--cwd
) in pixi tasks
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/380Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.5.0...v0.6.0
"},{"location":"CHANGELOG/#050-2023-10-03","title":"[0.5.0] - 2023-10-03","text":""},{"location":"CHANGELOG/#highlights_29","title":"Highlights","text":"We rebuilt pixi shell
, fixing the fact that your rc
file would overrule the environment activation.
shell
works and make activation more robust by @wolfv in https://github.com/prefix-dev/pixi/pull/316.gitignore
and .gitattributes
files by @ruben-arts in https://github.com/prefix-dev/pixi/pull/359--locked
and --frozen
to getting an up-to-date prefix by @ruben-arts in https://github.com/prefix-dev/pixi/pull/363pixi
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/353 & https://github.com/prefix-dev/pixi/pull/365cargo upgrade --all --incompatible
by @wolfv in https://github.com/prefix-dev/pixi/pull/358Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.4.0...v0.5.0
"},{"location":"CHANGELOG/#040-2023-09-22","title":"[0.4.0] - 2023-09-22","text":""},{"location":"CHANGELOG/#highlights_30","title":"Highlights","text":"This release adds the start of a new cli command pixi project
which will allow users to interact with the project configuration from the command line.
0.9.0
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/350xtsci-dist
to Community.md by @HaoZeke in https://github.com/prefix-dev/pixi/pull/339ribasim
to Community.md by @Hofer-Julian in https://github.com/prefix-dev/pixi/pull/340LFortran
to Community.md by @wolfv in https://github.com/prefix-dev/pixi/pull/341pixi project channel add
subcommand by @baszalmstra and @ruben-arts in https://github.com/prefix-dev/pixi/pull/347Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.3.0...v0.4.0
"},{"location":"CHANGELOG/#030-2023-09-11","title":"[0.3.0] - 2023-09-11","text":""},{"location":"CHANGELOG/#highlights_31","title":"Highlights","text":"This releases fixes a lot of issues encountered by the community as well as some awesome community contributions like the addition of pixi global list
and pixi global remove
.
system-requirements
are properly filtered by platform, by @ruben-arts (#299)thread 'tokio-runtime-worker' has overflowed its stack
issue, by @baszalmstra (#28)pixi global list
and pixi global remove
commands, by @cjfuller (#318)--manifest-path
must point to a pixi.toml
file, by @baszalmstra (#324)pixi search
command to search for packages, by @Wackyator. (#244)[target.win-64.tasks]
, by @ruben-arts. (#269)pixi.lock
automatically, by @spenserblack. (#265)As this is our first Semantic Versioning release, we'll change from the prototype to the developing phase, as semver describes. A 0.x release could be anything from a new major feature to a breaking change where the 0.0.x releases will be bugfixes or small improvements.
"},{"location":"CHANGELOG/#highlights_33","title":"Highlights","text":"unix
platforms, by @baszalmstra. (#250)miette
, by @baszalmstra. (#211)aarch64-linux
, by @pavelzw. (#233)libsolv
as the default solver, by @ruben-arts. (#209)condax
in the docs, by @maresb. (#207)brew
installation instructions, by @wolfv. (#208)activation.scripts
to the pixi.toml
to configure environment activation, by @ruben-arts. (#217)pixi upload
command to upload packages to prefix.dev
, by @wolfv. (#127)pixi.toml
, by @wolfv. (#218)pixi task list
to show all tasks in the project, by @tdejager. (#228)--color
to configure the colors in the output, by @baszalmstra. (#243)pixi.toml
and .gitignore
, by @pavelzw. (#216)pixi.toml
, by @wolfv. (#220)PS1
variable when going into a pixi shell
, by @ruben-arts. (#201)pixi add
, by @baszalmstra. (#213)run
subcommand to use the deno_task_shell
for improved cross-platform functionality. More details in the Deno Task Runner documentation.info
subcommand to retrieve system-specific information understood by pixi
.[commands]
in the pixi.toml
is now called [tasks]
. (#177)pixi info
command to get more system information by @wolfv in (#158)deno_task_shell
to execute commands in pixi run
by @baszalmstra in (#173)pixi command
command to the cli by @tdejager in (#177)pixi auth
command by @wolfv in (#183)depends_on
by @tdejager in (#161)PATH
variable where it is already set by @baszalmstra in (#169)pixi run
by @tdejager in (#190)Improving the reliability is important to us, so we added an integration testing framework, we can now test as close as possible to the CLI level using cargo
.
pixi build
, allowing host-
and build-
dependencies
(#149)Fixing Windows installer build in CI. (#145)
"},{"location":"CHANGELOG/#004-2023-06-26","title":"[0.0.4] - 2023-06-26","text":""},{"location":"CHANGELOG/#highlights_37","title":"Highlights","text":"A new command, auth
which can be used to authenticate the host of the package channels. A new command, shell
which can be used to start a shell in the pixi environment of a project. A refactor of the install
command which is changed to global install
and the install
command now installs a pixi project if you run it in the directory. Platform specific dependencies using [target.linux-64.dependencies]
instead of [dependencies]
in the pixi.toml
Lots and lots of fixes and improvements to make it easier for this user, where bumping to the new version of rattler
helped a lot.
pixi.toml
issues(#111)shell
command to use the pixi environment without pixi run
. (#116)-v, -vv, -vvv
(#118)auth
command to be able to login or logout of a host like repo.prefix.dev
if you're using private channels. (#120)pixi install
moved to pixi global install
and pixi install
became the installation of a project using the pixi.toml
(#124)pixi run
uses default shell (#119)pixi add
command is fixed. (#132)Pixi is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Use pixi cross-platform, on Windows, Mac and Linux.
"},{"location":"#installation","title":"Installation","text":"To install pixi
you can run the following command in your terminal:
curl -fsSL https://pixi.sh/install.sh | bash\n
The above invocation will automatically download the latest version of pixi
, extract it, and move the pixi
binary to ~/.pixi/bin
. If this directory does not already exist, the script will create it.
The script will also update your ~/.bash_profile
to include ~/.pixi/bin
in your PATH, allowing you to invoke the pixi
command from anywhere.
PowerShell
:
iwr -useb https://pixi.sh/install.ps1 | iex\n
winget
: winget install prefix-dev.pixi\n
The above invocation will automatically download the latest version of pixi
, extract it, and move the pixi
binary to LocalAppData/pixi/bin
. If this directory does not already exist, the script will create it. The command will also automatically add LocalAppData/pixi/bin
to your path allowing you to invoke pixi
from anywhere.
Tip
You might need to restart your terminal or source your shell for the changes to take effect.
You can find more options for the installation script here.
"},{"location":"#autocompletion","title":"Autocompletion","text":"To get autocompletion follow the instructions for your shell. Afterwards, restart the shell or source the shell config file.
"},{"location":"#bash-default-on-most-linux-systems","title":"Bash (default on most Linux systems)","text":"echo 'eval \"$(pixi completion --shell bash)\"' >> ~/.bashrc\n
"},{"location":"#zsh-default-on-macos","title":"Zsh (default on macOS)","text":"echo 'eval \"$(pixi completion --shell zsh)\"' >> ~/.zshrc\n
"},{"location":"#powershell-pre-installed-on-all-windows-systems","title":"PowerShell (pre-installed on all Windows systems)","text":"Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'\n
Failure because no profile file exists
Make sure your profile file exists, otherwise create it with:
New-Item -Path $PROFILE -ItemType File -Force\n
"},{"location":"#fish","title":"Fish","text":"echo 'pixi completion --shell fish | source' > ~/.config/fish/completions/pixi.fish\n
"},{"location":"#nushell","title":"Nushell","text":"Add the following to the end of your Nushell env file (find it by running $nu.env-path
in Nushell):
mkdir ~/.cache/pixi\npixi completion --shell nushell | save -f ~/.cache/pixi/completions.nu\n
And add the following to the end of your Nushell configuration (find it by running $nu.config-path
):
use ~/.cache/pixi/completions.nu *\n
"},{"location":"#elvish","title":"Elvish","text":"echo 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv\n
"},{"location":"#alternative-installation-methods","title":"Alternative installation methods","text":"Although we recommend installing pixi through the above method we also provide additional installation methods.
"},{"location":"#homebrew","title":"Homebrew","text":"Pixi is available via homebrew. To install pixi via homebrew simply run:
brew install pixi\n
"},{"location":"#windows-installer","title":"Windows installer","text":"We provide an msi
installer on our GitHub releases page. The installer will download pixi and add it to the path.
pixi is 100% written in Rust, and therefore it can be installed, built and tested with cargo. To start using pixi from a source build run:
cargo install --locked --git https://github.com/prefix-dev/pixi.git pixi\n
We don't publish to crates.io
anymore, so you need to install it from the repository. The reason for this is that we depend on some unpublished crates which disallows us to publish to crates.io
.
or when you want to make changes use:
cargo build\ncargo test\n
If you have any issues building because of the dependency on rattler
checkout its compile steps.
The installation script has several options that can be manipulated through environment variables.
Variable Description Default ValuePIXI_VERSION
The version of pixi getting installed, can be used to up- or down-grade. latest
PIXI_HOME
The location of the binary folder. $HOME/.pixi
PIXI_ARCH
The architecture the pixi version was built for. uname -m
PIXI_NO_PATH_UPDATE
If set the $PATH
will not be updated to add pixi
to it. TMP_DIR
The temporary directory the script uses to download to and unpack the binary from. /tmp
For example, on Apple Silicon, you can force the installation of the x86 version:
curl -fsSL https://pixi.sh/install.sh | PIXI_ARCH=x86_64 bash\n
Or set the version curl -fsSL https://pixi.sh/install.sh | PIXI_VERSION=v0.18.0 bash\n
The installation script has several options that can be manipulated through environment variables.
Variable Environment variable Description Default ValuePixiVersion
PIXI_VERSION
The version of pixi getting installed, can be used to up- or down-grade. latest
PixiHome
PIXI_HOME
The location of the installation. $Env:USERPROFILE\\.pixi
NoPathUpdate
If set, the $PATH
will not be updated to add pixi
to it. For example, set the version using:
iwr -useb https://pixi.sh/install.ps1 | iex -Args \"-PixiVersion v0.18.0\"\n
"},{"location":"#update","title":"Update","text":"Updating is as simple as installing, rerunning the installation script gets you the latest version.
pixi self-update\n
Or get a specific pixi version using: pixi self-update --version x.y.z\n
Note
If you've used a package manager like brew
, mamba
, conda
, paru
etc. to install pixi
. It's preferable to use the built-in update mechanism. e.g. brew upgrade pixi
.
To uninstall pixi from your system, simply remove the binary.
Linux & macOSWindowsrm ~/.pixi/bin/pixi\n
$PIXI_BIN = \"$Env:LocalAppData\\pixi\\bin\\pixi\"; Remove-Item -Path $PIXI_BIN\n
After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi
directory and remove the directory from your path.
When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:
[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)\n
Customize your badge
To further customize the look and feel of your badge, you can add &style=<custom-style>
at the end of the URL. See the documentation on shields.io for more info.
scipy
port using xtensor
conda
, mamba
, poetry
, pip
","text":"Tool Installs python Builds packages Runs predefined tasks Has lock files builtin Fast Use without python Conda \u2705 \u274c \u274c \u274c \u274c \u274c Mamba \u2705 \u274c \u274c \u274c \u2705 \u2705 Pip \u274c \u2705 \u274c \u274c \u274c \u274c Pixi \u2705 \ud83d\udea7 \u2705 \u2705 \u2705 \u2705 Poetry \u274c \u2705 \u274c \u2705 \u274c \u274c"},{"location":"FAQ/#why-the-name-pixi","title":"Why the name pixi
","text":"Starting with the name prefix
we iterated until we had a name that was easy to pronounce, spell and remember. There also wasn't a cli tool yet using that name. Unlike px
, pex
, pax
, etc. We think it sparks curiosity and fun, if you don't agree, I'm sorry, but you can always alias it to whatever you like.
alias not_pixi=\"pixi\"\n
PowerShell:
New-Alias -Name not_pixi -Value pixi\n
"},{"location":"FAQ/#where-is-pixi-build","title":"Where is pixi build
","text":"TL;DR: It's coming we promise!
pixi build
is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool which we're creating with rattler-build
which will be used as a library in pixi.
Ensure you've got pixi
set up. If running pixi
doesn't show the help, see the getting started if it doesn't.
pixi\n
Initialize a new project and navigate to the project directory.
pixi init pixi-hello-world\ncd pixi-hello-world\n
Add the dependencies you would like to use.
pixi add python\n
Create a file named hello_world.py
in the directory and paste the following code into the file.
def hello():\n print(\"Hello World, to the new revolution in package management.\")\n\nif __name__ == \"__main__\":\n hello()\n
Run the code inside the environment.
pixi run python hello_world.py\n
You can also put this run command in a task.
pixi task add hello python hello_world.py\n
After adding the task, you can run the task using its name.
pixi run hello\n
Use the shell
command to activate the environment and start a new shell in there.
pixi shell\npython\nexit()\n
You've just learned the basic features of pixi:
Feel free to play around with what you just learned like adding more tasks, dependencies or code.
Happy coding!
"},{"location":"basic_usage/#use-pixi-as-a-global-installation-tool","title":"Use pixi as a global installation tool","text":"Use pixi to install tools on your machine.
Some notable examples:
# Awesome cross shell prompt, huge tip when using pixi!\npixi global install starship\n\n# Want to try a different shell?\npixi global install fish\n\n# Install other prefix.dev tools\npixi global install rattler-build\n\n# Install a linter you want to use in multiple projects.\npixi global install ruff\n
"},{"location":"basic_usage/#using-the-no-activation-option","title":"Using the --no-activation option","text":"When installing packages globally, you can use the --no-activation
option to prevent the insertion of environment activation code into the installed executable scripts. This means that when you run the installed executable, it won't modify the PATH
or CONDA_PREFIX
environment variables beforehand.
Example:
# Install a package without inserting activation code\npixi global install ruff --no-activation\n
This option can be useful in scenarios where you want more control over the environment activation or if you're using the installed executables in contexts where automatic activation might interfere with other processes.
"},{"location":"basic_usage/#use-pixi-in-github-actions","title":"Use pixi in GitHub Actions","text":"You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.
- uses: prefix-dev/setup-pixi@v0.5.1\n- run: pixi run cowpy \"Thanks for using pixi\"\n
See the GitHub Actions for more details.
"},{"location":"vision/","title":"Vision","text":"We created pixi
because we want to have a cargo/npm/yarn like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think that the user experience can be improved a lot. Modern package managers like cargo
have shown us, how great a package manager can be. We want to bring that experience to the conda ecosystem.
We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:
We are building on top of the conda packaging ecosystem, this means that we have a huge number of packages available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community maintained and very open to contributions. It is widely used in data science and scientific computing, robotics and other fields. And has a proven track record.
"},{"location":"vision/#target-languages","title":"Target languages","text":"Essentially, we are language agnostics, we are targeting any language that can be installed with conda. Including: C++, Python, Rust, Zig etc. But we do believe the python ecosystem can benefit from a good package manager that is based on conda. So we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when using it for multi-language projects e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system level packages.
"},{"location":"advanced/authentication/","title":"Authenticate pixi with a server","text":"You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.
Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n <HOST> The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n --token <TOKEN> The token to use (for authentication with prefix.dev)\n --username <USERNAME> The username to use (for basic HTTP authentication)\n --password <PASSWORD> The password to use (for basic HTTP authentication)\n --conda-token <CONDA_TOKEN> The token to use on anaconda.org / quetz authentication\n -v, --verbose... More output per occurrence\n -q, --quiet... Less output per occurrence\n -h, --help Print help\n
The different options are \"token\", \"conda-token\" and \"username + password\".
The token variant implements a standard \"Bearer Token\" authentication as is used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authentication: Bearer <TOKEN>
.
The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/...
.
The last option, username & password, are used for \"Basic HTTP Authentication\". This is the equivalent of adding http://user:password@myserver.com/...
. This authentication method can be configured quite easily with a reverse NGinx or Apache server and is thus commonly used in self-hosted systems.
Login to prefix.dev:
pixi auth login prefix.dev --token pfx_jj8WDzvnuTHEGdAhwRZMC1Ag8gSto8\n
Login to anaconda.org:
pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n
Login to a basic HTTP secured server:
pixi auth login myserver.com --username user --password password\n
"},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.
On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access
program that comes pre-installed on macOS. Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On Linux, one can use GNOME Keyring
(or just Keyring) to access credentials that are securely stored by libsecret
. Searching for rattler
should list all the credentials stored by pixi and other rattler-based programs.
If you run on a server with none of the aforementioned keychains available, then pixi falls back to store the credentials in an insecure JSON file. This JSON file is located at ~/.rattler/credentials.json
and contains the credentials.
You can use the RATTLER_AUTH_FILE
environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.
E.g.
export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n
The JSON should follow the following format:
{\n \"*.prefix.dev\": {\n \"BearerToken\": \"your_token\"\n },\n \"otherhost.com\": {\n \"BasicHTTP\": {\n \"username\": \"your_username\",\n \"password\": \"your_password\"\n }\n },\n \"conda.anaconda.org\": {\n \"CondaToken\": \"your_token\"\n }\n}\n
Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev
also matches repo.prefix.dev
).
Lastly you can set the authentication override file in the global configuration file.
"},{"location":"advanced/authentication/#pypi-authentication","title":"PyPI authentication","text":"Currently, we support the following methods for authenticating against PyPI:
.netrc
file authentication.We want to add more methods in the future, so if you have a specific method you would like to see, please let us know.
"},{"location":"advanced/authentication/#keyring-authentication","title":"Keyring authentication","text":"Currently, pixi supports the uv method of authentication through the Python keyring library. To enable this, use the CLI flag --pypi-keyring-provider
which can either be set to subprocess
(activated) or disabled
.
# From an existing pixi project\npixi install --pypi-keyring-provider subprocess\n
This option can also be set in the global configuration file under pypi-config.
"},{"location":"advanced/authentication/#installing-keyring","title":"Installing keyring","text":"To install keyring you can use pixi global install:
Either use:
pixi global install keyring\n
GCP and other backends The downside of this method is that, because you cannot yet inject into a pixi global environment, installing different keyring backends is not possible; only the default keyring backend can be used. Give the issue a \ud83d\udc4d if you would like to see inject as a feature.
Or alternatively, you can install keyring using pipx:
# Install pipx if you haven't already\npixi global install pipx\npipx install keyring\n\n# For Google Artifact Registry, also install and initialize its keyring backend.\n# Inject this into the pipx environment\npipx inject keyring keyrings.google-artifactregistry-auth --index-url https://pypi.org/simple\ngcloud auth login\n
"},{"location":"advanced/authentication/#using-keyring-with-basic-auth","title":"Using keyring with Basic Auth","text":"Use keyring to store your credentials, e.g.:
keyring set https://my-index/simple your_username\n# prompt will appear for your password\n
"},{"location":"advanced/authentication/#configuration","title":"Configuration","text":"Make sure to include username@
in the URL of the registry. An example of this would be:
[pypi-options]\nindex-url = \"https://username@custom-registry.com/simple\"\n
"},{"location":"advanced/authentication/#gcp","title":"GCP","text":"For Google Artifact Registry, you can use the Google Cloud SDK to authenticate. Make sure to have run gcloud auth login
before using pixi. Another thing to note is that you need to add oauth2accesstoken
to the URL of the registry. An example of this would be:
# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://oauth2accesstoken@<location>-python.pkg.dev/<project>/<repository>/simple\"]\n
Note
Include the /simple
at the end, replace the <location>
etc. with your own project, repository, and location values.
To find this URL more easily, you can use the gcloud
command:
gcloud artifacts print-settings python --project=<project> --repository=<repository> --location=<location>\n
"},{"location":"advanced/authentication/#azure-devops","title":"Azure DevOps","text":"Similarly for Azure DevOps, you can use the Azure keyring backend for authentication. The backend, along with installation instructions, can be found at keyring.artifacts.
After following the instructions and making sure that keyring works correctly, you can use the following configuration:
"},{"location":"advanced/authentication/#configuration_2","title":"Configuration","text":"# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://VssSessionToken@pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/pypi/simple/\"]\n
This should allow for getting packages from the Azure DevOps artifact registry."},{"location":"advanced/authentication/#installing-your-environment","title":"Installing your environment","text":"To actually install with keyring authentication, either configure your Global Config or use the flag:
pixi install --pypi-keyring-provider subprocess\n
"},{"location":"advanced/authentication/#netrc-file","title":".netrc
file","text":"pixi
allows you to access private registries securely by authenticating with credentials stored in a .netrc
file.
.netrc
file can be stored in your home directory ($HOME/.netrc
for Unix-like systems)%HOME%\\_netrc
).NETRC
variable (export NETRC=/my/custom/location/.netrc
). e.g export NETRC=/my/custom/location/.netrc pixi install
In the .netrc
file, you store authentication details like this:
machine registry-name\nlogin admin\npassword admin\n
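As an illustration, the same format can be parsed with Python's standard netrc module (a sketch to show the structure; pixi reads the file itself, and the registry name and credentials here are made up):

```python
import netrc
import os
import tempfile

# Write a throwaway .netrc with the example entry from above.
content = "machine registry-name\nlogin admin\npassword admin\n"
with tempfile.NamedTemporaryFile("w", suffix=".netrc", delete=False) as f:
    f.write(content)
    path = f.name

# authenticators() returns a (login, account, password) tuple per machine.
login, account, password = netrc.netrc(path).authenticators("registry-name")
print(login, password)  # admin admin
os.unlink(path)
```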
For more details, you can access the .netrc docs."},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"All logic deciding which dependencies can be installed from which channel comes down to the instructions we give the solver.
The actual code regarding this is in the rattler_solve
crate. This might however be hard to read. Therefore, this document will continue with simplified flow charts.
When a user defines a channel per dependency, the solver needs to know the other channels are unusable for this dependency.
[project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackagex = { version = \"*\", channel = \"my-channel\" }\n
In the packagex
example, the solver will understand that the package is only available in my-channel
and will not look for it in conda-forge
. The flowchart of the logic that excludes all other channels:
flowchart TD\n A[Start] --> B[Given a Dependency]\n B --> C{Channel Specific Dependency?}\n C -->|Yes| D[Exclude All Other Channels for This Package]\n C -->|No| E{Any Other Dependencies?}\n E -->|Yes| B\n E -->|No| F[End]\n D --> E
"},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"Channel priority is dictated by the order in the project.channels
array, where the first channel is the highest priority. For instance:
[project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
If the package is found in conda-forge
the solver will not look for it in my-channel
and your-channel
, because it tells the solver they are excluded. If the package is not found in conda-forge
the solver will look for it in my-channel
and if it is found there it will tell the solver to exclude your-channel
for this package. This diagram explains the logic: flowchart TD\n A[Start] --> B[Given a Dependency]\n B --> C{Loop Over Channels}\n C --> D{Package in This Channel?}\n D -->|No| C\n D -->|Yes| E{\"This the first channel\n for this package?\"}\n E -->|Yes| F[Include Package in Candidates]\n E -->|No| G[Exclude Package from Candidates]\n F --> H{Any Other Channels?}\n G --> H\n H -->|Yes| C\n H -->|No| I{Any Other Dependencies?}\n I -->|No| J[End]\n I -->|Yes| B
This method ensures the solver only adds a package to the candidates if it's found in the highest-priority channel available. If you have 10 channels and the package is found in the 5th channel, it will exclude the remaining 5 channels from the candidates if they also contain the package.
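The selection logic described above can be sketched in a few lines of Python (purely illustrative; this is not pixi's actual solver code):

```python
# Channels are ordered highest-priority first; `availability` maps each
# package to the set of channels that actually serve it.
def candidate_channels(channels, availability):
    result = {}
    for package, in_channels in availability.items():
        for channel in channels:  # first hit wins, later channels are excluded
            if channel in in_channels:
                result[package] = channel
                break
    return result

channels = ["conda-forge", "my-channel", "your-channel"]
availability = {
    "numpy": {"conda-forge", "my-channel"},      # present in the first channel
    "packagex": {"my-channel", "your-channel"},  # absent from conda-forge
}
print(candidate_channels(channels, availability))
# {'numpy': 'conda-forge', 'packagex': 'my-channel'}
```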
"},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"A common use case is to use pytorch
with nvidia
drivers, while also needing the conda-forge
channel for the main dependencies.
[project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
What this will do is get as much as possible from the nvidia/label/cuda-11.8.0
channel, which is actually only the cuda
package. Then it will get all packages from the nvidia
channel, which is a little more and some packages overlap the nvidia
and conda-forge
channel. Like the cuda-cudart
package, which will now only be retrieved from the nvidia
channel because of the priority logic.
Then it will get the packages from the conda-forge
channel, which is the main channel for the dependencies.
But the user only wants the pytorch packages from the pytorch
channel, which is why pytorch
is added last and the dependencies are added as channel specific dependencies.
We don't define the pytorch
channel before conda-forge
because we want to get as much as possible from the conda-forge
as the pytorch channel is not always shipping the best versions of all packages.
For example, it also ships the ffmpeg
package, but only an old version which doesn't work with the newer pytorch versions. Thus breaking the installation if we would skip the conda-forge
channel for ffmpeg
with the priority logic.
If you want to force a specific priority for a channel, you can use the priority
(int) key in the channel definition. The higher the number, the higher the priority. Unspecified priorities default to 0, but a channel's index in the array still counts as a priority, with the first in the list having the highest priority.
This priority definition is mostly important for multiple environments with different channel priorities, as by default feature channels are prepended to the project channels.
[project]\nname = \"test_channel_priority\"\nplatforms = [\"linux-64\", \"osx-64\", \"win-64\", \"osx-arm64\"]\nchannels = [\"conda-forge\"]\n\n[feature.a]\nchannels = [\"nvidia\"]\n\n[feature.b]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = 1}]\n\n[feature.c]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = -1}]\n\n[environments]\na = [\"a\"]\nb = [\"b\"]\nc = [\"c\"]\n
This example creates 4 environments, a
, b
, c
, and the default environment. Which will have the following channel order: Environment Resulting Channels order default conda-forge
a nvidia
, conda-forge
b nvidia
, pytorch
, conda-forge
c pytorch
, conda-forge
, nvidia
Check priority result with pixi info
Using pixi info
you can check the priority of the channels in the environment.
pixi info\nEnvironments\n------------\n Environment: default\n Features: default\n Channels: conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n Environment: a\n Features: a, default\n Channels: nvidia, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n Environment: b\n Features: b, default\n Channels: nvidia, pytorch, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n Environment: c\n Features: c, default\n Channels: pytorch, conda-forge, nvidia\nDependency count: 0\nTarget platforms: linux-64\n
"},{"location":"advanced/explain_info_command/","title":"Info command","text":"pixi info
prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in json
format using the --json
flag, which can be useful for programmatically reading it.
\u279c pixi info\n Pixi version: 0.13.0\n Platform: linux-64\n Virtual packages: __unix=0=0\n : __linux=6.5.12=0\n : __glibc=2.36=0\n : __cuda=12.3=0\n : __archspec=1=x86_64\n Cache dir: /home/user/.cache/rattler/cache\n Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n Version: 0.13.0\n Manifest file: /home/user/development/pixi/pixi.toml\n Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n Features: default\n Channels: conda-forge\n Dependency count: 10\n Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n Target platforms: linux-64, osx-arm64, win-64, osx-64\n Tasks: docs, test-all, test, build, lint, install, build-docs\n
"},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"The first part of the info output is information that is always available and tells you what pixi can read on your machine.
"},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.
"},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"The virtual packages that pixi can find on your machine.
In the Conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that are going to be installed, but rather are being used in the solve step to find if a package can be installed on the machine. A simple example: When a package depends on Cuda drivers being present on the host machine it can do that by depending on the __cuda
virtual package. In that case, if pixi cannot find the __cuda
virtual package on your machine the installation will fail.
The directory where pixi stores its cache. Checkout the cache documentation for more information.
"},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"Check the authentication documentation
"},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"[requires --extended
]
The size of the previously mentioned \"Cache dir\" in Mebibytes.
"},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"Everything below Project
is info about the project you're currently in. This info is only available if your path has a manifest file.
The path to the manifest file that describes the project.
"},{"location":"advanced/explain_info_command/#last-updated","title":"Last updated","text":"The last time the lock file was updated, either manually or by pixi itself.
"},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"The environment info defined per environment. If you don't have any environments defined, this will only show the default
environment.
This lists which features are enabled in the environment. For the default this is only default
The list of channels used in this environment.
"},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"The amount of dependencies defined that are defined for this environment (not the amount of installed dependencies).
"},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"The list of dependencies defined for this environment.
"},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"The platforms the project has defined.
"},{"location":"advanced/github_actions/","title":"GitHub Action","text":"We created prefix-dev/setup-pixi to facilitate using pixi in CI.
"},{"location":"advanced/github_actions/#usage","title":"Usage","text":"- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n pixi-version: v0.30.0\n cache: true\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n
Pin your action versions
Since pixi is not yet stable, the API of this action may change between minor versions. Please pin the versions of this action to a specific version (i.e., prefix-dev/setup-pixi@v0.8.0
) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.
Put the following in your .github/dependabot.yml
file to enable Dependabot for your GitHub Actions:
version: 2\nupdates:\n - package-ecosystem: github-actions\n directory: /\n schedule:\n interval: monthly # (1)!\n groups:\n dependencies:\n patterns:\n - \"*\"\n
daily
, weekly
To see all available input arguments, see the action.yml
file in setup-pixi
. The most important features are described below.
The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock
file is present. It will then use the pixi.lock
file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache
input argument.
Customize your cache key
If you need to customize your cache-key, you can use the cache-key
input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>
.
Only save caches on main
In order to not exceed the 10 GB cache size limit as fast, you might want to restrict when the cache is saved. This can be done by setting the cache-write
argument.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n cache: true\n cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
"},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments
input argument. This will install all environments that are specified and cache them.
[project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
"},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"The following example will install the py311
and py312
environments in different jobs.
test:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.8.0\n with:\n environments: ${{ matrix.environment }}\n
"},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"The following example will install both the py311
and the py312
environment on the runner.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n environments: >- # (1)!\n py311\n py312\n- run: |\n pixi run -e py311 test\n pixi run -e py312 test\n
separated by spaces, equivalent to
environments: py311 py312\n
Caching behavior if you don't specify environments
If you don't specify any environment, the default
environment will be installed and cached, even if you use other environments.
There are currently three ways to authenticate with pixi:
For more information, see Authentication.
Handle secrets with care
Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }}
syntax. These secrets will always be masked in the logs.
Specify the token using the auth-token
input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
"},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"Specify the username and password using the auth-username
and auth-password
input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments with artifactory for example.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n auth-host: custom-artifactory.com\n auth-username: ${{ secrets.PIXI_USERNAME }}\n auth-password: ${{ secrets.PIXI_PASSWORD }}\n
"},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"Specify the conda-token using the conda-token
input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel
) is used at anaconda.org or with quetz instances.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n auth-host: anaconda.org # (1)!\n conda-token: ${{ secrets.CONDA_TOKEN }}\n
setup-pixi
allows you to run command inside of the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}
. This can be useful if you want to run commands inside of the pixi environment, but don't want to use the pixi run
command for each command.
- run: | # (1)!\n python --version\n pip install --no-deps -e .\n shell: pixi run bash -e {0}\n
You can even run Python scripts like this:
- run: | # (1)!\n import my_package\n print(\"Hello world!\")\n shell: pixi run python {0}\n
If you want to use PowerShell, you need to specify -Command
as well.
- run: | # (1)!\n python --version | Select-String \"3.11\"\n shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
How does it work under the hood?
Under the hood, the shell: xyz {0}
option is implemented by creating a temporary script file and calling xyz
with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0}
directly but instead have to use shell: pixi run bash {0}
. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell
in the documentation. See the official documentation and ADR 0277 for more information about how the shell:
input works in GitHub Actions.
pixi exec
","text":"With pixi exec
, you can also run a one-off command inside a temporary pixi environment.
- run: | # (1)!\n zstd --version\n shell: pixi exec --spec zstd -- bash -e {0}\n
- run: | # (1)!\n import ruamel.yaml\n # ...\n shell: pixi exec --spec python=3.11.* --spec ruamel.yaml -- python {0}\n
See here for more information about pixi exec
.
Instead of using a custom shell wrapper, you can also make all pixi-installed binaries available to subsequent steps by \"activating\" the installed environment in the currently running job. To this end, setup-pixi
adds all environment variables set when executing pixi run
to $GITHUB_ENV
and, similarly, adds all path modifications to $GITHUB_PATH
. As a result, all installed binaries can be accessed without having to call pixi run
.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n activate-environment: true\n
If you are installing multiple environments, you will need to specify the name of the environment that you want to be activated.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n environments: >-\n py311\n py312\n activate-environment: py311\n
Activating an environment may be more useful than using a custom shell wrapper as it allows non-shell based steps to access binaries on the path. However, be aware that this option augments the environment of your job.
"},{"location":"advanced/github_actions/#-frozen-and-locked","title":"--frozen
and --locked
","text":"You can specify whether setup-pixi
should run pixi install --frozen
or pixi install --locked
depending on the frozen
or the locked
input argument. See the official documentation for more information about the --frozen
and --locked
flags.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n locked: true\n # or\n frozen: true\n
If you don't specify anything, the default behavior is to run pixi install --locked
if a pixi.lock
file is present and pixi install
otherwise.
There are two types of debug logging that you can enable.
"},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"The first one is the debug logging of the action itself. This can be enabled by for the action by re-running the action in debug mode:
Debug logging documentation
For more information about debug logging in GitHub Actions, see the official documentation.
"},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"The second type is the debug logging of the pixi executable. This can be specified by setting the log-level
input.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n log-level: vvv # (1)!\n
q
, default
, v
, vv
, or vvv
.If nothing is specified, log-level
will default to default
or vv
depending on if debug logging is enabled for the action.
On self-hosted runners, it may happen that some files are persisted between jobs. This can lead to problems or secrets getting leaked between job runs. To avoid this, you can use the post-cleanup
input to specify the post cleanup behavior of the action (i.e., what happens after all your commands have been executed).
If you set post-cleanup
to true
, the action will delete the following files:
.pixi
environment~/.rattler
If nothing is specified, post-cleanup
will default to true
.
On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi
to do this.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n post-cleanup: true\n pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
${{ runner.temp }}\\Scripts\\pixi.exe
on WindowsYou can also use a preinstalled local version of pixi on the runner by not setting any of the pixi-version
, pixi-url
or pixi-bin-path
inputs. This action will then try to find a local version of pixi in the runner's PATH.
pyproject.toml
as a manifest file for pixi.","text":"setup-pixi
will automatically pick up the pyproject.toml
if it contains a [tool.pixi.project]
section and no pixi.toml
. This can be overwritten by setting the manifest-path
input argument.
- uses: prefix-dev/setup-pixi@v0.8.0\n with:\n manifest-path: pyproject.toml\n
"},{"location":"advanced/github_actions/#more-examples","title":"More examples","text":"If you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi
repository.
pyproject.toml
in pixi","text":"We support the use of the pyproject.toml
as our manifest file in pixi. This allows the user to keep one file with all configuration. The pyproject.toml
file is a standard for Python projects. We don't advise to use the pyproject.toml
file for anything else than python projects, the pixi.toml
is better suited for other types of projects.
pyproject.toml
file","text":"When you already have a pyproject.toml
file in your project, you can run pixi init
in a that folder. Pixi will automatically
[tool.pixi.project]
section to the file, with the platform and channel information required by pixi;.gitignore
and .gitattributes
files.If you do not have an existing pyproject.toml
file , you can run pixi init --format pyproject
in your project folder. In that case, pixi will create a pyproject.toml
manifest from scratch with some sane defaults.
The pyproject.toml
file supports the requires_python
field. Pixi understands that field and automatically adds the version to the dependencies.
This is an example of a pyproject.toml
file with the requires_python
field, which will be used as the python dependency:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n
Which is equivalent to:
equivalent pixi.toml[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[dependencies]\npython = \">=3.9\"\n
"},{"location":"advanced/pyproject_toml/#dependency-section","title":"Dependency section","text":"The pyproject.toml
file supports the dependencies
field. Pixi understands that field and automatically adds the dependencies to the project as [pypi-dependencies]
.
This is an example of a pyproject.toml
file with the dependencies
field:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n \"numpy\",\n \"pandas\",\n \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n
Which is equivalent to:
equivalent pixi.toml[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[pypi-dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n\n[dependencies]\npython = \">=3.9\"\n
You can overwrite these with conda dependencies by adding them to the dependencies
field:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n \"numpy\",\n \"pandas\",\n \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n
This would result in the conda dependencies being installed and the pypi dependencies being ignored. As pixi takes the conda dependencies over the pypi dependencies.
"},{"location":"advanced/pyproject_toml/#optional-dependencies","title":"Optional dependencies","text":"If your python project includes groups of optional dependencies, pixi will automatically interpret them as pixi features of the same name with the associated pypi-dependencies
.
You can add them to pixi environments manually, or use pixi init
to setup the project, which will create one environment per feature. Self-references to other groups of optional dependencies are also handled.
For instance, imagine you have a project folder with a pyproject.toml
file similar to:
[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n
Running pixi init
in that project folder will transform the pyproject.toml
file into:
[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # if executed on linux\n\n[tool.pixi.environments]\ndefault = {features = [], solve-group = \"default\"}\ntest = {features = [\"test\"], solve-group = \"default\"}\nall = {features = [\"all\", \"test\"], solve-group = \"default\"}\n
In this example, three environments will be created by pixi:
All environments will be solved together, as indicated by the common solve-group
, and added to the lock file. You can edit the [tool.pixi.environments]
section manually to adapt it to your use case (e.g. if you do not need a particular environment).
As the pyproject.toml
file supports the full pixi spec with [tool.pixi]
prepended an example would look like this:
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n \"numpy\",\n \"pandas\",\n \"matplotlib\",\n \"ruff\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\ncompilers = \"*\"\ncmake = \"*\"\n\n[tool.pixi.tasks]\nstart = \"python my_project/main.py\"\nlint = \"ruff lint\"\n\n[tool.pixi.system-requirements]\ncuda = \"11.0\"\n\n[tool.pixi.feature.test.dependencies]\npytest = \"*\"\n\n[tool.pixi.feature.test.tasks]\ntest = \"pytest\"\n\n[tool.pixi.environments]\ntest = [\"test\"]\n
"},{"location":"advanced/pyproject_toml/#build-system-section","title":"Build-system section","text":"The pyproject.toml
file normally contains a [build-system]
section. Pixi will use this section to build and install the project if it is added as a pypi path dependency.
If the pyproject.toml
file does not contain any [build-system]
section, pixi will fall back to uv's default, which is equivalent to the below:
[build-system]\nrequires = [\"setuptools >= 40.8.0\"]\nbuild-backend = \"setuptools.build_meta:__legacy__\"\n
Including a [build-system]
section is highly recommended. If you are not sure of the build-backend you want to use, including the [build-system]
section below in your pyproject.toml
is a good starting point. pixi init --format pyproject
defaults to hatchling
. The advantages of hatchling
over setuptools
are outlined on its website.
[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n
"},{"location":"advanced/updates_github_actions/","title":"Update lockfiles with GitHub Actions","text":"You can leverage GitHub Actions in combination with pavelzw/pixi-diff-to-markdown to automatically update your lockfiles similar to dependabot or renovate in other ecosystems.
Dependabot/Renovate support for pixi
You can track native Dependabot support for pixi in dependabot/dependabot-core #2227 and for Renovate in renovatebot/renovate #2213.
"},{"location":"advanced/updates_github_actions/#how-to-use","title":"How to use","text":"To get started, create a new GitHub Actions workflow file in your repository.
.github/workflows/update-lockfiles.ymlname: Update lockfiles\n\npermissions: # (1)!\n contents: write\n pull-requests: write\n\non:\n workflow_dispatch:\n schedule:\n - cron: 0 5 1 * * # (2)!\n\njobs:\n pixi-update:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - name: Set up pixi\n uses: prefix-dev/setup-pixi@v0.8.1\n with:\n run-install: false\n - name: Update lockfiles\n run: |\n set -o pipefail\n pixi update --json | pixi exec pixi-diff-to-markdown >> diff.md\n - name: Create pull request\n uses: peter-evans/create-pull-request@v6\n with:\n token: ${{ secrets.GITHUB_TOKEN }}\n commit-message: Update pixi lockfile\n title: Update pixi lockfile\n body-path: diff.md\n branch: update-pixi\n base: main\n labels: pixi\n delete-branch: true\n add-paths: pixi.lock\n
peter-evans/create-pull-request
In order for this workflow to work, you need to set \"Allow GitHub Actions to create and approve pull requests\" to true in your repository settings (in \"Actions\" -> \"General\").
Tip
If you don't have any pypi-dependencies
, you can use pixi update --json --no-install
to speed up diff generation.
"},{"location":"advanced/updates_github_actions/#triggering-ci-in-automated-prs","title":"Triggering CI in automated PRs","text":"
In order to prevent accidental recursive GitHub Workflow runs, GitHub decided to not trigger any workflows on automated PRs when using the default GITHUB_TOKEN
. There are a couple of ways to work around this limitation. You can find excellent documentation for this in peter-evans/create-pull-request
, see here.
You can customize the summary by either using command-line-arguments of pixi-diff-to-markdown
or by specifying the configuration in pixi.toml
under [tool.pixi-diff-to-markdown]
. See the pixi-diff-to-markdown documentation or run pixi-diff-to-markdown --help
for more information.
If you want to use the same workflow in multiple repositories in your GitHub organization, you can create a reusable workflow. You can find more information in the GitHub documentation.
"},{"location":"design_proposals/pixi_global_manifest/","title":"Pixi Global Manifest","text":"Feedback wanted
This document is work in progress, and community feedback is greatly appreciated. Please share your thoughts at our GitHub discussion.
"},{"location":"design_proposals/pixi_global_manifest/#motivation","title":"Motivation","text":"pixi global
is currently limited to imperatively managing CLI packages. The next iteration of this feature should fulfill the following needs:
There are a few things we wanted to keep in mind in the design:
The global environments and exposed executables will be managed by a human-readable manifest. This manifest will stick to conventions set by pixi.toml
where possible. Among other things it will be written in the TOML format, be named pixi-global.toml
and be placed at ~/.pixi/manifests/pixi-global.toml
. The motivation for the location is discussed further below.
# The name of the environment is `python`\n[envs.python]\nchannels = [\"conda-forge\"]\n# optional, defaults to your current OS\nplatform = \"osx-64\"\n# It will expose python, python3 and python3.11, but not pip\n[envs.python.dependencies]\npython = \"3.11.*\"\npip = \"*\"\n\n[envs.python.exposed]\npython = \"python\"\npython3 = \"python3\"\n\"python3.11\" = \"python3.11\"\n\n# The name of the environment is `python3-10`\n[envs.python3-10]\nchannels = [\"https://fast.prefix.dev/conda-forge\"]\n# It will expose python3.10\n[envs.python3-10.dependencies]\npython = \"3.10.*\"\n\n[envs.python3-10.exposed]\n\"python3.10\" = \"python\"\n
"},{"location":"design_proposals/pixi_global_manifest/#cli","title":"CLI","text":"Install one or more packages PACKAGE
and expose their executables. If --environment
has been given, all packages will be installed in the same environment. If the environment already exists, the command will return with an error. --expose
can be given if --environment
is given as well or if only a single PACKAGE
will be installed. The syntax for MAPPING
is exposed_name=executable_name
, so for example python3.10=python
. --platform
sets the platform of the environment to PLATFORM
. Multiple channels can be specified by using --channel
multiple times. By default, if no channel is provided, the default-channels
key in the pixi configuration is used, which again defaults to \"conda-forge\".
pixi global install [--expose MAPPING] [--environment ENV] [--platform PLATFORM] [--channel CHANNEL]... PACKAGE...\n
Remove environments ENV
.
pixi global uninstall <ENV>...\n
Update PACKAGE
if --package
is given. If not, all packages in environments ENV
will be updated. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines, the update process will stop. If the update leads to executables being added, it will offer to expose each new binary individually. --assume-yes
will assume yes as answer for every question that would otherwise be asked interactively.
pixi global update [--package PACKAGE] [--assume-yes] <ENV>...\n
Updates all packages in all environments. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines, the update process will stop. If the update leads to executables being added, it will offer to expose each new binary individually. --assume-yes
will assume yes as answer for every question that would otherwise be asked interactively.
pixi global update-all [--assume-yes]\n
Add one or more packages PACKAGE
into an existing environment ENV
. If environment ENV
does not exist, it will return with an error. Without --expose
no binary will be exposed. If you don't mention a spec like python=3.8.*
, the spec will be unconstrained with *
. The syntax for MAPPING
is exposed_name=executable_name
, so for example python3.10=python
.
pixi global add --environment ENV [--expose MAPPING] <PACKAGE>...\n
Remove package PACKAGE
from environment ENV
. If that was the last package, the whole environment is removed and that information is printed to the console. If this leads to executables being removed, it will offer to remove the mappings. If the user declines, the remove process will stop.
pixi global remove --environment ENV PACKAGE\n
Add one or more MAPPING
for environment ENV
which describe which executables are exposed. The syntax for MAPPING
is exposed_name=executable_name
, so for example python3.10=python
.
pixi global expose add --environment ENV <MAPPING>...\n
Remove one or more exposed BINARY
from environment ENV
pixi global expose remove --environment ENV <BINARY>...\n
Ensure that the environments on the machine reflect the state in the manifest. The manifest is the single source of truth. Only if there's no manifest, will the data from existing environments be used to create a manifest. pixi global sync
is implied by most other pixi global
commands.
pixi global sync\n
List all environments, their specs and exposed executables
pixi global list\n
Set the channels CHANNEL
for a certain environment ENV
in the pixi global manifest.
pixi global channel set --environment ENV <CHANNEL>...\n
Set the platform PLATFORM
for a certain environment ENV
in the pixi global manifest.
pixi global platform set --environment ENV PLATFORM\n
"},{"location":"design_proposals/pixi_global_manifest/#simple-workflow","title":"Simple workflow","text":"Create environment python
, install package python=3.10.*
and expose all executables of that package
pixi global install python=3.10.*\n
Update all packages in environment python
pixi global update python\n
Remove environment python
pixi global uninstall python\n
Create environment python
and pip
, install corresponding packages and expose all executables of that packages
pixi global install python pip\n
Remove environments python
and pip
pixi global uninstall python pip\n
Create environment python-pip
, install python
and pip
in the same environment and expose all executables of these packages
pixi global install --environment python-pip python pip\n
"},{"location":"design_proposals/pixi_global_manifest/#adding-dependencies","title":"Adding dependencies","text":"Create environment python
, install package python
and expose all executables of that package. Then add package hypercorn
to environment python
without exposing its executables.
pixi global install python\npixi global add --environment python hypercorn\n
Update package cryptography
(a dependency of hypercorn
) to 43.0.0
in environment python
pixi global update --environment python cryptography=43.0.0\n
Then remove hypercorn
again.
pixi global remove --environment python hypercorn\n
"},{"location":"design_proposals/pixi_global_manifest/#specifying-which-executables-to-expose","title":"Specifying which executables to expose","text":"Make a new environment python3-10
with package python=3.10
and expose the python
executable as python3.10
.
pixi global install --environment python3-10 --expose \"python3.10=python\" python=3.10\n
Now python3.10
is available.
Run the following in order to expose python
from environment python3-10
as python3-10
instead.
pixi global expose remove --environment python3-10 python3.10\npixi global expose add --environment python3-10 \"python3-10=python\"\n
Now python3-10
is available, but python3.10
isn't anymore.
Most pixi global
sub commands imply a pixi global sync
.
install
/remove
/inject
/other global command
.On a first run on a clean computer, running the following creates the manifest and ~/.pixi/envs/python
.
pixi global install python\n
Delete ~/.pixi
and syncing should add the environment python
again as described in the manifest
rm -rf ~/.pixi/envs\npixi global sync\n
If there's no manifest, but existing environments, pixi will create a manifest that matches your current environments. It is to be decided whether the user should be asked if they want an empty manifest instead, or if it should always import the data from the environments.
rm <manifest>\npixi global sync\n
If we remove the python environment from the manifest, running pixi global sync
will also remove the ~/.pixi/envs/python
environment from the file system.
vim <manifest>\npixi global sync\n
"},{"location":"design_proposals/pixi_global_manifest/#open-questions","title":"Open Questions","text":""},{"location":"design_proposals/pixi_global_manifest/#should-we-version-the-manifest","title":"Should we version the manifest?","text":"Something like:
[manifest]\nversion = 1\n
We still have to figure out which existing programs do something similar and how they benefit from it.
"},{"location":"design_proposals/pixi_global_manifest/#multiple-manifests","title":"Multiple manifests","text":"We could go for one default manifest, but also parse other manifests in the same directory. The only requirement to be parsed as manifest is a .toml
extension In order to modify those with the CLI
one would have to add an option --manifest
to select the correct one.
It is unclear whether the first implementation already needs to support this. At the very least we should put the manifest into its own folder like ~/.pixi/global/manifests/pixi-global.toml
In order to make it easier to manage manifests in version control, we could allow to set the manifest path via a key in the pixi configuration.
config.tomlglobal_manifests = \"/path/to/your/manifests\"\n
"},{"location":"examples/cpp-sdl/","title":"SDL example","text":" The cpp-sdl
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/cpp-sdl\n
Run the start
command
pixi run start\n
Using the depends-on
feature you only needed to run the start
task but under water it is running the following tasks.
# Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
"},{"location":"examples/opencv/","title":"Opencv example","text":"The opencv
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/opencv\n
"},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"Run the start
command to start the face detection algorithm.
pixi run start\n
The screen that starts should look like this:
Check out the webcame_capture.py
to see how we detect a face.
Next to face recognition, a camera calibration example is also included.
You'll need a checkerboard for this to work. Print this:
Then run
pixi run calibrate\n
To make a picture for calibration press SPACE
Do this approximately 10 times with the chessboard in view of the camera
After that press ESC
which will start the calibration.
When the calibration is done, the camera will be used again to find the distance to the checkerboard.
"},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"The nav2
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/ros2-nav2\n
Run the start
command
pixi run start\n
"},{"location":"features/advanced_tasks/","title":"Advanced tasks","text":"When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.
Here are some quick examples
pixi.toml[tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n \"cmake\",\n # Use the cross-platform Ninja generator\n \"-G\",\n \"Ninja\",\n # The source is in the root directory\n \"-S\",\n \".\",\n # We wanna build in the .build directory\n \"-B\",\n \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends-on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
"},{"location":"features/advanced_tasks/#depends-on","title":"Depends on","text":"Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.
An obvious example is compiling before running an application.
Checkout our cpp_sdl
example for a running example. In that package we have some tasks that depend on each other, so we can assure that when you run pixi run start
everything is set up as expected.
pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n
Results in the following lines added to the pixi.toml
[tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends-on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends-on = [\"build\"] }\n
pixi run start\n
The tasks will be executed after each other:
configure
because it has no dependencies.build
as it only depends on configure
.start
as all it dependencies are run.If one of the commands fails (exit with non-zero code.) it will stop and the next one will not be started.
With this logic, you can also create aliases as you don't have to specify any command in a task.
pixi task add fmt ruff\npixi task add lint pylint\n
pixi task alias style fmt lint\n
Results in the following pixi.toml
.
fmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends-on = [\"fmt\", \"lint\"] }\n
Now run both tools with one command.
pixi run style\n
"},{"location":"features/advanced_tasks/#working-directory","title":"Working directory","text":"Pixi tasks support the definition of a working directory.
cwd
\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml
file is located.
Consider a pixi project structured as follows:
\u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n \u2514\u2500\u2500 bar.py\n
To add a task to run the bar.py
file, use:
pixi task add bar \"python bar.py\" --cwd scripts\n
This will add the following line to manifest file:
pixi.toml[tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n
"},{"location":"features/advanced_tasks/#caching","title":"Caching","text":"When you specify inputs
and/or outputs
to a task, pixi will reuse the result of the task.
For the cache, pixi checks that the following are true:
If all of these conditions are met, pixi will not run the task again and instead use the existing result.
Inputs and outputs can be specified as globs, which will be expanded to all matching files.
pixi.toml[tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n
Note: if you want to debug the globs you can use the --verbose
flag to see which files are selected.
# shows info logs of all files that were selected by the globs\npixi run -v start\n
"},{"location":"features/advanced_tasks/#environment-variables","title":"Environment variables","text":"You can set environment variables for a task. These are seen as \"default\" values for the variables as you can overwrite them from the shell.
pixi.toml
[tasks]\necho = { cmd = \"echo $ARGUMENT\", env = { ARGUMENT = \"hello\" } }\n
If you run pixi run echo
it will output hello
. When you set the environment variable ARGUMENT
before running the task, it will use that value instead. ARGUMENT=world pixi run echo\n\u2728 Pixi task (echo in default): echo $ARGUMENT\nworld\n
These variables are not shared over tasks, so you need to define these for every task you want to use them in.
Extend instead of overwrite
If you use the same environment variable in the value as in the key of the map you will also overwrite the variable. For example overwriting a PATH
pixi.toml
[tasks]\necho = { cmd = \"echo $PATH\", env = { PATH = \"/tmp/path:$PATH\" } }\n
This will output /tmp/path:/usr/bin:/bin
instead of the original /usr/bin:/bin
."},{"location":"features/advanced_tasks/#clean-environment","title":"Clean environment","text":"You can make sure the environment of a task is \"pixi only\". Here pixi will only include the minimal required environment variables for your platform to run the command in. The environment will contain all variables set by the conda environment like \"CONDA_PREFIX\"
. It will however include some default values from the shell, like: \"DISPLAY\"
, \"LC_ALL\"
, \"LC_TIME\"
, \"LC_NUMERIC\"
, \"LC_MEASUREMENT\"
, \"SHELL\"
, \"USER\"
, \"USERNAME\"
, \"LOGNAME\"
, \"HOME\"
, \"HOSTNAME\"
,\"TMPDIR\"
, \"XPC_SERVICE_NAME\"
, \"XPC_FLAGS\"
[tasks]\nclean_command = { cmd = \"python run_in_isolated_env.py\", clean-env = true}\n
This setting can also be set from the command line with pixi run --clean-env TASK_NAME
. clean-env
not supported on Windows
On Windows it's hard to create a \"clean environment\" as conda-forge
doesn't ship Windows compilers and Windows needs a lot of base variables. This makes the feature not worth implementing, as the number of edge cases would make it unusable.
To support the different OS's (Windows, OSX and Linux), pixi integrates a shell that can run on all of them. This is deno_task_shell
. The task shell is a limited implementation of a bourne-shell interface.
Next to running actual executables like ./myprogram
, cmake
or python
the shell has some built-in commands.
cp
: Copies files.mv
: Moves files.rm
: Remove files or directories. Ex: rm -rf [FILE]...
- Commonly used to recursively delete files or directories.mkdir
: Makes directories. Ex. mkdir -p DIRECTORY...
- Commonly used to make a directory and all its parents with no error if it exists.pwd
: Prints the name of the current/working directory.sleep
: Delays for a specified amount of time. Ex. sleep 1
to sleep for 1 second, sleep 0.5
to sleep for half a second, or sleep 1m
to sleep a minuteecho
: Displays a line of text.cat
: Concatenates files and outputs them on stdout. When no arguments are provided, it reads and outputs stdin.exit
: Causes the shell to exit.unset
: Unsets environment variables.xargs
: Builds arguments from stdin and executes a command.&&
or ||
to separate two commands.&&
: if the command before &&
succeeds continue with the next command.||
: if the command before ||
fails continue with the next command.;
to run two commands without checking if the first command failed or succeeded.export ENV_VAR=value
$ENV_VAR
unset ENV_VAR
VAR=value
VAR=value && echo $VAR
|
: echo Hello | python receiving_app.py
|&
: use this to also get the stderr as input.$()
to use the output of a command as input for another command.python main.py $(git rev-parse HEAD)
!
before any command will negate the exit code from 1 to 0 or visa-versa.>
to redirect the stdout to a file.echo hello > file.txt
will put hello
in file.txt
and overwrite existing text.python main.py 2> file.txt
will put the stderr
output in file.txt
.python main.py &> file.txt
will put the stderr
and stdout
in file.txt
.echo hello >> file.txt
will append hello
to the existing file.txt
.*
to expand all options.echo *.py
will echo all filenames that end with .py
echo **/*.py
will echo all filenames that end with .py
in this directory and all descendant directories.echo data[0-9].csv
will echo all filenames that have a single number after data
and before .csv
More info in deno_task_shell
documentation.
Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.
"},{"location":"features/environment/#structure","title":"Structure","text":"A pixi environment is located in the .pixi/envs
directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.
If you look at the .pixi/envs
directory, you will see a directory for each environment, the default
being the one that is normally used, if you specify a custom environment the name you specified will be used.
.pixi\n\u2514\u2500\u2500 envs\n \u251c\u2500\u2500 cuda\n \u2502 \u251c\u2500\u2500 bin\n \u2502 \u251c\u2500\u2500 conda-meta\n \u2502 \u251c\u2500\u2500 etc\n \u2502 \u251c\u2500\u2500 include\n \u2502 \u251c\u2500\u2500 lib\n \u2502 ...\n \u2514\u2500\u2500 default\n \u251c\u2500\u2500 bin\n \u251c\u2500\u2500 conda-meta\n \u251c\u2500\u2500 etc\n \u251c\u2500\u2500 include\n \u251c\u2500\u2500 lib\n ...\n
These directories are conda environments, and you can use them as such, but you cannot manually edit them, this should always go through the pixi.toml
. Pixi will always make sure the environment is in sync with the pixi.lock
file. If this is not the case then all the commands that use the environment will automatically update the environment, e.g. pixi run
, pixi shell
.
If you want to clean up the environments, you can simply delete the .pixi/envs
directory, and pixi will recreate the environments when needed.
# either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n
"},{"location":"features/environment/#activation","title":"Activation","text":"An environment is nothing more than a set of files that are installed into a certain location, that somewhat mimics a global system install. You need to activate the environment to use it. In the most simple sense that mean adding the bin
directory of the environment to the PATH
variable. But there is more to it in a conda environment, as it also sets some environment variables.
To do the activation we have multiple options:
pixi shell
command to open a shell with the environment activated.pixi shell-hook
command to print the command to activate the environment in your current shell.pixi run
command to run a command in the environment.Where the run
command is special as it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.
Using the pixi shell-hook
in pixi you would get the following output:
export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n
It sets the PATH
and some more environment variables. But more importantly it also runs activation scripts that are presented by the installed packages. An example of this would be the libglib_activate.sh
script. Thus, just adding the bin
directory to the PATH
is not enough.
conda activate
-like activation","text":"If you prefer to use the traditional conda activate
-like activation, you could use the pixi shell-hook
command.
$ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n
Warning
It is not encouraged to use the traditional conda activate
-like activation, as deactivating the environment is not really possible. Use pixi shell
instead.
pixi
with direnv
","text":"Installing direnv Of course you can use pixi
to install direnv
globally. We recommend to run
pixi global install direnv
to install the latest version of direnv
on your computer.
This allows you to use pixi
in combination with direnv
. Enter the following into your .envrc
file:
watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
pixi.lock
changes, direnv
invokes the shell-hook again.direnv
ensures that the environment is deactivated when you leave the directory.$ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
"},{"location":"features/environment/#environment-variables","title":"Environment variables","text":"The following environment variables are set by pixi, when using the pixi run
, pixi shell
, or pixi shell-hook
command:
PIXI_PROJECT_ROOT
: The root directory of the project.PIXI_PROJECT_NAME
: The name of the project.PIXI_PROJECT_MANIFEST
: The path to the manifest file (pixi.toml
).PIXI_PROJECT_VERSION
: The version of the project.PIXI_PROMPT
: The prompt to use in the shell, also used by pixi shell
itself.PIXI_ENVIRONMENT_NAME
: The name of the environment, defaults to default
.PIXI_ENVIRONMENT_PLATFORMS
: Comma separated list of platforms supported by the project.CONDA_PREFIX
: The path to the environment. (Used by multiple tools that already understand conda environments)CONDA_DEFAULT_ENV
: The name of the environment. (Used by multiple tools that already understand conda environments)PATH
: We prepend the bin
directory of the environment to the PATH
variable, so you can use the tools installed in the environment directly.INIT_CWD
: ONLY IN pixi run
: The directory where the command was run from.Note
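A sketch of how a task script might read these variables (the echo lines and the ":-" fallbacks are illustrative only; the variables themselves are set automatically when the script runs via pixi run):

```shell
# Print a few pixi-provided variables from inside a task.
# The ":-" fallbacks only exist so the script also runs outside pixi.
echo "Project:     ${PIXI_PROJECT_NAME:-unknown}"
echo "Root:        ${PIXI_PROJECT_ROOT:-$(pwd)}"
echo "Environment: ${PIXI_ENVIRONMENT_NAME:-default}"
```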
Even though the variables are environment variables these cannot be overridden. E.g. you can not change the root of the project by setting PIXI_PROJECT_ROOT
in the environment.
When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock
file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml
and will put the output of the solve step into the pixi.lock
file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:
Pixi solves both the conda
and PyPI
dependencies, where the PyPI
dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. These solvers are split between the rattler
and rip
library, these control the heavy lifting of the solving process, which is executed by our custom SAT solver: resolvo
. resolve
is able to solve multiple ecosystem like conda
and PyPI
. It implements the lazy solving process for PyPI
packages, which means that it only downloads the metadata of the packages that are needed to solve the environment. It also supports the conda
way of solving, which means that it downloads the metadata of all the packages at once and then solves in one go.
For the [pypi-dependencies]
, rip
implements sdist
building to retrieve the metadata of the packages, and wheel
building to install the packages. For this building step, pixi
requires you to first install python
in the (conda)[dependencies]
section of the pixi.toml
file. This will always be slower than the pure conda solves. So for the best pixi experience you should stay within the [dependencies]
section of the pixi.toml
file.
Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools.
Normally the location would be the following platform-specific default cache folder:
$XDG_CACHE_HOME/rattler
or $HOME/.cache/rattler
$HOME/Library/Caches/rattler
%LOCALAPPDATA%\\rattler
This location is configurable by setting the PIXI_CACHE_DIR
or RATTLER_CACHE_DIR
environment variable.
When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.
The cache contains multiple folders concerning different caches from within pixi.
pkgs
: Contains the downloaded/unpacked conda
packages.repodata
: Contains the conda
repodata cache.uv-cache
: Contains the uv
cache. This includes multiple caches, e.g. built-wheels
wheels
archives
http-cache
: Contains the conda-pypi
mapping cache.pixi.lock
lock file","text":"A lock file is the protector of the environments, and pixi is the key to unlock it.
"},{"location":"features/lockfile/#what-is-a-lock-file","title":"What is a lock file?","text":"A lock file locks the environment in a specific state. Within pixi a lock file is a description of the packages in an environment. The lock file contains two definitions:
The environments that are used in the project with their complete set of packages. e.g.:
environments:\n default:\n channels:\n - url: https://conda.anaconda.org/conda-forge/\n packages:\n linux-64:\n ...\n - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.12.2-hab00c5b_0_cpython.conda\n ...\n osx-64:\n ...\n - conda: https://conda.anaconda.org/conda-forge/osx-64/python-3.12.2-h9f0c242_0_cpython.conda\n ...\n
The definition of the packages themselves. e.g.:
- kind: conda\n name: python\n version: 3.12.2\n build: h9f0c242_0_cpython\n subdir: osx-64\n url: https://conda.anaconda.org/conda-forge/osx-64/python-3.12.2-h9f0c242_0_cpython.conda\n sha256: 7647ac06c3798a182a4bcb1ff58864f1ef81eb3acea6971295304c23e43252fb\n md5: 0179b8007ba008cf5bec11f3b3853902\n depends:\n - bzip2 >=1.0.8,<2.0a0\n - libexpat >=2.5.0,<3.0a0\n - libffi >=3.4,<4.0a0\n - libsqlite >=3.45.1,<4.0a0\n - libzlib >=1.2.13,<1.3.0a0\n - ncurses >=6.4,<7.0a0\n - openssl >=3.2.1,<4.0a0\n - readline >=8.2,<9.0a0\n - tk >=8.6.13,<8.7.0a0\n - tzdata\n - xz >=5.2.6,<6.0a0\n constrains:\n - python_abi 3.12.* *_cp312\n license: Python-2.0\n size: 14596811\n timestamp: 1708118065292\n
Pixi uses the lock file for the following reasons:
This gives you (and your collaborators) a way to truly reproduce the environment you are working in. Using tools such as Docker becomes much less necessary.
"},{"location":"features/lockfile/#when-is-a-lock-file-generated","title":"When is a lock file generated?","text":"A lock file is generated when you install a package. More specifically, a lock file is generated from the solve step of the installation process. The solve will return a list of packages that are to be installed, and the lock file will be generated from this list. This diagram tries to explain the process:
graph TD\n A[Install] --> B[Solve]\n B --> C[Generate and write lock file]\n C --> D[Install Packages]
"},{"location":"features/lockfile/#how-to-use-a-lock-file","title":"How to use a lock file","text":"Do not edit the lock file
A lock file is a machine-only file and should not be edited by hand.
That said, the pixi.lock
is human-readable, so it's easy to track the changes in the environment. We recommend you track the lock file in git
or other version control systems. This will ensure that the environment is always reproducible and that you can always revert to a working state in case something goes wrong. The pixi.lock
and the manifest file pixi.toml
/pyproject.toml
should always be in sync.
Running the following commands will check and automatically update the lock file if you changed any dependencies:
pixi install
pixi run
pixi shell
pixi shell-hook
pixi tree
pixi list
pixi add
pixi remove
All the commands that support the interaction with the lock file also include some lock file usage options:
--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file[^1]. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.Syncing the lock file with the manifest file
The lock file is always matched with the whole configuration in the manifest file. This means that if you change the manifest file, the lock file will be updated.
flowchart TD\n C[manifest] --> A[lockfile] --> B[environment]
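The --frozen and --locked semantics described above can be sketched as a small decision function. This is an illustrative simplification, not pixi's actual logic; the function name and return values are hypothetical:

```python
def lockfile_action(frozen: bool = False, locked: bool = False, up_to_date: bool = True) -> str:
    """Decide what to do with the lock file given the flags described above."""
    if frozen and locked:
        raise ValueError("--frozen conflicts with --locked")
    if frozen:
        # Install exactly what the lock file says, even if the manifest changed.
        return "install-from-lockfile"
    if locked:
        # Only proceed when the lock file matches the manifest.
        return "install-from-lockfile" if up_to_date else "error"
    # Default behavior: refresh the lock file first when it is outdated.
    return "install-from-lockfile" if up_to_date else "update-lockfile-then-install"
```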
"},{"location":"features/lockfile/#lockfile-satisfiability","title":"Lockfile satisfiability","text":"The lock file is a description of the environment, and it should always be satisfiable. Satisfiable means that the given manifest file and the created environment are in sync with the lockfile. If the lock file is not satisfiable, pixi will generate a new lock file automatically.
Steps to check if the lock file is satisfiable:
environments
in the manifest file are in the lock filechannels
in the manifest file are in the lock filepackages
in the manifest file are in the lock file, and the versions in the lock file are compatible with the requirements in the manifest file, for both conda
and pypi
packages.matchspec
which can match on all the information we store in the lockfile, even timestamp
, subdir
and license
.pypi-dependencies
are added, all conda
package that are python packages in the lock file have a purls
field.pypi
editable packages are correct.If you want to get more details checkout the actual code as this is a simplification of the actual code.
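A few of the satisfiability checks above can be sketched as follows. This is a heavily simplified illustration with hypothetical data shapes, not pixi's actual code:

```python
def is_satisfiable(manifest: dict, lockfile: dict) -> bool:
    """Check that each manifest environment, its channels, and its packages
    appear in the lock file (a simplified subset of the checks above)."""
    for env_name, env in manifest["environments"].items():
        locked = lockfile["environments"].get(env_name)
        if locked is None:
            return False  # environment missing from the lock file
        if not set(env["channels"]) <= set(locked["channels"]):
            return False  # a channel is missing from the lock file
        locked_names = {pkg["name"] for pkg in locked["packages"]}
        if not set(env["packages"]) <= locked_names:
            return False  # a package is missing from the lock file
    return True
```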
"},{"location":"features/lockfile/#the-version-of-the-lock-file","title":"The version of the lock file","text":"The lock file has a version number; this ensures that the lock file is compatible with the local version of pixi
.
version: 4\n
Pixi is backward compatible with the lock file, but not forward compatible. This means that you can use an older lock file with a newer version of pixi
, but not the other way around.
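This compatibility rule can be sketched as a one-line check. Illustrative only; the constant and function name are hypothetical:

```python
SUPPORTED_VERSION = 4  # hypothetical: the lock-file version this pixi build writes

def can_read_lockfile(file_version: int) -> bool:
    # Backward compatible: older lock files are readable, newer ones are not.
    return file_version <= SUPPORTED_VERSION
```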
The lock file can grow quite large, especially if you have a lot of packages installed. This is because the lock file contains all the information about the packages.
If you cannot think of a case where you would benefit from a fast, reproducible environment, then you don't need a lock file.
But take note of the following:
If you want to remove the lock file, you can simply delete it.
rm pixi.lock\n
This will remove the lock file, and the next time you run a command that requires the lock file, it will be generated again.
Note
This does remove the locked state of the environment, and the environment will be updated to the latest version of the packages.
"},{"location":"features/multi_environment/","title":"Multi Environment Support","text":""},{"location":"features/multi_environment/#motivating-example","title":"Motivating Example","text":"There are multiple scenarios where multiple environments are useful.
py39
and py310
or polars 0.12
and 0.13
.lint
or docs
.dev
.prod
and test-prod
where test-prod
is a strict superset of prod
.cuda
environment and a cpu
environment.This prepares pixi
for use in large projects with multiple use-cases, multiple developers and different CI needs.
There are a few things we wanted to keep in mind in the design:
Introduce environment sets into the pixi.toml
this describes environments based on feature
's. Introduce features into the pixi.toml
that can describe parts of environments. As an environment goes beyond just dependencies
the features
should be described including the following fields:
dependencies
: The conda package dependenciespypi-dependencies
: The pypi package dependenciessystem-requirements
: The system requirements of the environmentactivation
: The activation information for the environmentplatforms
: The platforms the environment can be run on.channels
: The channels used to create the environment. Adding the priority
field to the channels to allow concatenation of channels instead of overwriting.target
: All the above features but also separated by targets.tasks
: Feature specific tasks, tasks in one environment are selected as default tasks for the environment.[dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
Different dependencies per feature[feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
Full set of environment modification in one feature[feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwrite, so the default channels are still used.\n# Using the priority the concatenation is controlled, default is 0, the default channels are used last.\n# Highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}] # Results in: [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
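The priority-based channel concatenation described in the comments above can be sketched as follows. This is an illustrative sketch, not pixi's implementation; the function name is hypothetical:

```python
def concat_channels(default_channels, feature_channels):
    """Merge feature channels with the default channels by priority.

    feature_channels is a list of (name, priority) pairs; the default
    channels get priority 0, and higher priorities come first."""
    prio = {name: p for name, p in feature_channels}
    for name in default_channels:
        prio.setdefault(name, 0)
    # Highest priority first; insertion order breaks ties (stable sort).
    return [name for name, _ in sorted(prio.items(), key=lambda kv: -kv[1])]
```

With the example above, feature channels `nvidia` (priority 0) and `pytorch` (priority -1) combined with the default `conda-forge` yield `["nvidia", "conda-forge", "pytorch"]`.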
Define tasks as defaults of an environment[feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environment test test`\n
The environment definition should contain the following fields:
features: Vec<Feature>
: The features that are included in the environment set, which is also the default field in the environments.solve-group: String
: The solve group is used to group environments together at the solve stage. This is useful for environments that need to have the same dependencies but might extend them with additional dependencies. For instance when testing a production environment with additional test dependencies.[environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
Testing a production environment with additional dependencies[environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
Creating environments without including the default feature[dependencies]\npython = \"*\"\nnumpy = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom environment which only has the `lint` feature (numpy isn't part of that env).\nlint = {features = [\"lint\"], no-default-feature = true}\n
"},{"location":"features/multi_environment/#lock-file-structure","title":"lock file Structure","text":"Within the pixi.lock
file, a package may now include an additional environments
field, specifying the environment to which it belongs. To avoid duplication, the package's environments
field may list multiple environments, so the lock file stays minimal in size.
- platform: linux-64\n name: pre-commit\n version: 3.3.3\n category: main\n environments:\n - dev\n - test\n - lint\n ...:\n- platform: linux-64\n name: python\n version: 3.9.3\n category: main\n environments:\n - dev\n - test\n - lint\n - py39\n - default\n ...:\n
"},{"location":"features/multi_environment/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"Users can manually activate the desired environment via command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user the cli would look like this:
Default behavior\u279c pixi run python\n# Runs python in the `default` environment\n
Activating an specific environment\u279c pixi run -e test pytest\n\u279c pixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n
Activating a shell in an environment\u279c pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
Running any command in an environment\u279c pixi run -e test any_command\n# Runs any_command in the `test` environment which doesn't require to be predefined as a task.\n
"},{"location":"features/multi_environment/#ambiguous-environment-selection","title":"Ambiguous Environment Selection","text":"It's possible to define tasks in multiple environments, in this case the user should be prompted to select the environment.
Here is a simple example of a task only manifest:
pixi.toml
[project]\nname = \"test_ambiguous_env\"\nchannels = []\nplatforms = [\"linux-64\", \"win-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ndefault = \"echo Default\"\nambi = \"echo Ambi::Default\"\n[feature.test.tasks]\ntest = \"echo Test\"\nambi = \"echo Ambi::Test\"\n\n[feature.dev.tasks]\ndev = \"echo Dev\"\nambi = \"echo Ambi::Dev\"\n\n[environments]\ndefault = [\"test\", \"dev\"]\ntest = [\"test\"]\ndev = [\"dev\"]\n
Trying to run the abmi
task will prompt the user to select the environment. As it is available in all environments. Interactive selection of environments if task is in multiple environments\u279c pixi run ambi\n? The task 'ambi' can be run in multiple environments.\n\nPlease select an environment to run the task in: \u203a\n\u276f default # selecting default\n test\n dev\n\n\u2728 Pixi task (ambi in default): echo Ambi::Test\nAmbi::Test\n
As you can see it runs the task defined in the feature.task
but it is run in the default
environment. This happens because the ambi
task is defined in the test
feature, and it is overwritten in the default environment. So the tasks.default
is now non-reachable from any environment.
Some other results running in this example:
\u279c pixi run --environment test ambi\n\u2728 Pixi task (ambi in test): echo Ambi::Test\nAmbi::Test\n\n\u279c pixi run --environment dev ambi\n\u2728 Pixi task (ambi in dev): echo Ambi::Dev\nAmbi::Dev\n\n# dev is run in the default environment\n\u279c pixi run dev\n\u2728 Pixi task (dev in default): echo Dev\nDev\n\n# dev is run in the dev environment\n\u279c pixi run -e dev dev\n\u2728 Pixi task (dev in dev): echo Dev\nDev\n
"},{"location":"features/multi_environment/#important-links","title":"Important links","text":"In polarify
they want to test multiple versions combined with multiple versions of polars. This is currently done by using a matrix in GitHub actions. This can be replaced by using multiple environments.
[project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
.github/workflows/test.ymljobs:\n tests-per-env:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: ${{ matrix.environment }}\n - name: Run tasks\n run: |\n pixi run --environment ${{ matrix.environment }} test\n tests-with-multiple-envs:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: pl017 pl018\n - run: |\n pixi run -e pl017 test\n pixi run -e pl018 test\n
Test vs Production example This is an example of a project that has a test
feature and prod
environment. The prod
environment is a production environment that contains the run dependencies. The test
feature is a set of dependencies and tasks that we want to put on top of the previously solved prod
environment. This is a common use case where we want to test the production environment with additional dependencies.
pixi.toml
[project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In ci you would run the following commands: pixi run postinstall-e && pixi run test\n
Locally you would run the following command: pixi run postinstall-e && pixi run dev\n
Then in a Dockerfile you would run the following command: Dockerfile
FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n
Multiple machines from one project This is an example for an ML project that should be executable on a machine that supports cuda
and mlx
. It should also be executable on machines that don't support cuda
or mlx
, we use the cpu
feature for this.
[project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n# MLX is only available on macOS >=13.5 (>14.0 is recommended)\nsystem-requirements = {macos = \"13.5\"}\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.16.0,<0.17.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n
Running the project on a cuda machinepixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
Running the project with mlxpixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
Running the project on a machine without cuda or mlxpixi run train-model\n
"},{"location":"features/multi_platform_configuration/","title":"Multi platform config","text":"Pixi's vision includes being supported on all major platforms. Sometimes that needs some extra configuration to work well. On this page, you will learn what you can configure to align better with the platform you are making your application for.
Here is an example manifest file that highlights some of the features:
pixi.toml
pyproject.toml
pixi.toml[project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
pyproject.toml[tool.pixi.project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tool.pixi.dependencies]\npython = \">=3.8\"\n\n[tool.pixi.target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"~=3.7.0\"\n\n\n[tool.pixi.activation]\nscripts = [\"setup.sh\"]\n\n[tool.pixi.target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
"},{"location":"features/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"The project.platforms
defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lock file.
Running pixi install
on a platform that is not configured will warn the user that it is not setup for that platform:
\u276f pixi install\n \u00d7 the project is not configured for your current platform\n \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n \u00b7 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u00b7 \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n \u2570\u2500\u2500\u2500\u2500\n help: The project needs to be configured to support your platform (linux-64).\n
"},{"location":"features/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"With the target specifier, you can overwrite the original configuration specifically for a single platform. If you are targeting a specific platform in your target specifier that was not specified in your project.platforms
then pixi will throw an error.
It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.
pixi.toml[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n
In the above example, we specify that we depend on msmpi
only on Windows. We also specifically want python
on 3.8
when installing on Windows. This will overwrite the dependencies from the generic set of dependencies. This will not touch any of the other platforms.
You can use pixi's cli to add these dependencies to the manifest file.
pixi add --platform win-64 posix\n
This also works for the host
and build
dependencies.
pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n
Which results in this.
pixi.toml[target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
"},{"location":"features/multi_platform_configuration/#activation","title":"Activation","text":"Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your projects. Generated activation scripts are often in this category, default scripts in unix are bash
and for windows they are bat
To deal with this, you can define your activation scripts using the target definition.
pixi.toml
[activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
When this project is run on win-64
it will only execute the target scripts not the scripts specified in the default activation.scripts
"},{"location":"features/system_requirements/","title":"System Requirements in pixi","text":"System requirements define the minimal system specifications necessary during dependency resolution for a project. For instance, specifying a Unix system with a particular minimal libc
version ensures that dependencies are compatible with the project's environment.
System specifications are closely related to virtual packages, allowing for flexible and accurate dependency management.
"},{"location":"features/system_requirements/#default-system-requirements","title":"Default System Requirements","text":"The following configurations outline the default minimal system requirements for different operating systems:
LinuxWindowsosx-64osx-arm64# Default system requirements for Linux\n[system-requirements]\nlinux = \"4.18\"\nlibc = { family = \"glibc\", version = \"2.28\" }\n
Windows currently has no minimal system requirements defined. If your project requires specific Windows configurations, you should define them accordingly.
# Default system requirements for macOS\n[system-requirements]\nmacos = \"13.0\"\n
# Default system requirements for macOS ARM64\n[system-requirements]\nmacos = \"13.0\"\n
"},{"location":"features/system_requirements/#customizing-system-requirements","title":"Customizing System Requirements","text":"You only need to define system requirements if your project necessitates a different set from the defaults. This is common when installing environments on older or newer versions of operating systems.
"},{"location":"features/system_requirements/#adjusting-for-older-systems","title":"Adjusting for Older Systems","text":"If you're encountering an error like:
\u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '4.18' but the system has version '4.12.14'\n
This indicates that the project's system requirements are higher than your current system's specifications. To resolve this, you can lower the system requirements in your project's configuration:
[system-requirements]\nlinux = \"4.12.14\"\n
This adjustment informs the dependency resolver to accommodate the older system version.
"},{"location":"features/system_requirements/#using-cuda-in-pixi","title":"Using CUDA in pixi","text":"To utilize CUDA in your project, you must specify the desired CUDA version in the system-requirements table. This ensures that CUDA is recognized and appropriately locked into the lock file if necessary.
Example Configuration
[system-requirements]\ncuda = \"12\" # Replace \"12\" with the specific CUDA version you intend to use\n
"},{"location":"features/system_requirements/#setting-system-requirements-environment-specific","title":"Setting System Requirements environment specific","text":"This can be set per feature
in the the manifest
file.
[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[environments]\ncuda = [\"cuda\"]\n
"},{"location":"features/system_requirements/#available-override-options","title":"Available Override Options","text":"In certain scenarios, you might need to override the system requirements detected on your machine. This can be particularly useful when working on systems that do not meet the project's default requirements.
You can override virtual packages by setting the following environment variables:
CONDA_OVERRIDE_CUDA
- Description: Sets the CUDA version. - Usage Example: CONDA_OVERRIDE_CUDA=11
CONDA_OVERRIDE_GLIBC
- Description: Sets the glibc version. - Usage Example: CONDA_OVERRIDE_GLIBC=2.28
CONDA_OVERRIDE_OSX
- Description: Sets the macOS version. - Usage Example: CONDA_OVERRIDE_OSX=13.0
For more detailed information on managing virtual packages
and overriding system requirements, refer to the Conda Documentation.
Using JupyterLab with pixi is very simple. You can just create a new pixi project and add the jupyterlab
package to it. The full example is provided under the following Github link.
pixi init\npixi add jupyterlab\n
This will create a new pixi project and add the jupyterlab
package to it. You can then start JupyterLab using the following command:
pixi run jupyter lab\n
If you want to add more \"kernels\" to JupyterLab, you can simply add them to your current project \u2013 as well as any dependencies from the scientific stack you might need.
pixi add bash_kernel ipywidgets matplotlib numpy pandas # ...\n
"},{"location":"ide_integration/jupyterlab/#what-kernels-are-available","title":"What kernels are available?","text":"You can easily install more \"kernels\" for JupyterLab. The conda-forge
repository has a number of interesting additional kernels - not just Python!
bash_kernel
A kernel for bashxeus-cpp
A C++ kernel based on the new clang-replxeus-cling
A C++ kernel based on the slightly older Clingxeus-lua
A Lua kernelxeus-sql
A kernel for SQLr-irkernel
An R kernelIf you want to have only one instance of JupyterLab running but still want per-directory Pixi environments, you can use one of the kernels provided by the pixi-kernel
package.
To get started, create a Pixi project, add jupyterlab
and pixi-kernel
and then start JupyterLab:
pixi init\npixi add jupyterlab pixi-kernel\npixi run jupyter lab\n
This will start JupyterLab and open it in your browser.
pixi-kernel
searches for a manifest file, either pixi.toml
or pyproject.toml
, in the same directory of your notebook or in any parent directory. When it finds one, it will use the environment specified in the manifest file to start the kernel and run your notebooks.
If you just want to check a JupyterLab environment running in the cloud using pixi-kernel
, you can visit Binder.
You can use PyCharm with pixi environments by using the conda
shim provided by the pixi-pycharm package.
To get started, add pixi-pycharm
to your pixi project.
pixi add pixi-pycharm\n
This will ensure that the conda shim is installed in your project's environment.
Having pixi-pycharm
installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda
file (on Windows: conda.bat
) which is located in .pixi/envs/default/libexec
. You can get the path using the following command:
pixi run 'echo $CONDA_PREFIX/libexec/conda'\n
pixi run 'echo $CONDA_PREFIX\\\\libexec\\\\conda.bat'\n
This is an executable that tricks PyCharm into thinking it's the proper conda
executable. Under the hood it redirects all calls to the corresponding pixi
equivalent.
Use the conda shim from this pixi project
Please make sure that this is the conda
shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.
Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.
PyCharm should now be able to show you the installed packages as well.
You can now run your programs and tests as usual.
Mark .pixi
as excluded
In order for PyCharm to not get confused about the .pixi
directory, please mark it as excluded.
Also, when using a remote interpreter, you should exclude the .pixi
directory on the remote machine. Instead, you should run pixi install
on the remote machine and select the conda shim from there.
If your project uses multiple environments to tests different Python versions or dependencies, you can add multiple environments to PyCharm by specifying Use existing environment in the Add Python Interpreter dialog.
You can then specify the corresponding environment in the bottom right corner of the PyCharm window.
"},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"
When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It might also come up that you have multiple environments with the same name.
It is recommended to rename the environments to something unique.
"},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"Logs are written to ~/.cache/pixi-pycharm.log
. You can use them to debug problems. Please attach the logs when filing a bug report.
You can use pixi
to manage your R dependencies. The conda-forge channel contains a wide range of R packages that can be installed using pixi
.
R packages are usually prefixed with r-
in the conda-forge channel. To install an R package, you can use the following command:
pixi add r-<package-name>\n# for example\npixi add r-ggplot2\n
"},{"location":"ide_integration/r_studio/#using-r-packages-in-rstudio","title":"Using R packages in RStudio","text":"To use the R packages installed by pixi
in RStudio, you need to run rstudio
from an activated environment. This can be achieved by running RStudio from pixi shell
or from a task in the pixi.toml
file.
The full example can be found here: RStudio example. Here is an example of a pixi.toml
file that sets up an RStudio task:
[project]\nname = \"r\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[target.linux.tasks]\nrstudio = \"rstudio\"\n\n[target.osx.tasks]\nrstudio = \"open -a rstudio\"\n# or alternatively with the full path:\n# rstudio = \"/Applications/RStudio.app/Contents/MacOS/RStudio\"\n\n[dependencies]\nr = \">=4.3,<5\"\nr-ggplot2 = \">=3.5.0,<3.6\"\n
Once RStudio has loaded, you can execute the following R code that uses the ggplot2
package:
# Load the ggplot2 package\nlibrary(ggplot2)\n\n# Load the built-in 'mtcars' dataset\ndata <- mtcars\n\n# Create a scatterplot of 'mpg' vs 'wt'\nggplot(data, aes(x = wt, y = mpg)) +\n geom_point() +\n labs(x = \"Weight (1000 lbs)\", y = \"Miles per Gallon\") +\n ggtitle(\"Fuel Efficiency vs. Weight\")\n
Note
This example assumes that you have installed RStudio system-wide. We are working on updating RStudio as well as the R interpreter builds on Windows for maximum compatibility with pixi
.
--verbose (-v|vv|vvv)
Increase the verbosity of the output messages, the -v|vv|vvv increases the level of verbosity respectively.--help (-h)
Shows help information, use -h
to get the short version of the help.--version (-V)
: shows the version of pixi that is used.--quiet (-q)
: Decreases the amount of output.--color <COLOR>
: Whether the log needs to be colored [env: PIXI_COLOR=
] [default: auto
] [possible values: always
, never
, auto
]. Pixi also honors the FORCE_COLOR
and NO_COLOR
environment variables. They both take precedence over --color
and PIXI_COLOR
.--no-progress
: Disables the progress bar.[env: PIXI_NO_PROGRESS
] [default: false
]init
","text":"This command is used to create a new project. It initializes a pixi.toml
file and also prepares a .gitignore
to prevent the environment from being added to git
.
It also supports the pyproject.toml
file, if you have a pyproject.toml
file in the directory where you run pixi init
, it appends the pixi data to the pyproject.toml
instead of a new pixi.toml
file.
[PATH]
: Where to place the project (defaults to current path) [default: .
]--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that the project supports. (Allowed to be used more than once)--import <ENV_FILE> (-i)
: Import an existing conda environment file, e.g. environment.yml
.--format <FORMAT>
: Specify the format of the project file, either pyproject
or pixi
. [default: pixi
]Importing an environment.yml
When importing an environment, the pixi.toml
will be created with the dependencies from the environment file. The pixi.lock
will be created when you install the environment. We don't support git+
URLs as dependencies for pip packages. For the defaults
channel we use main
, r
and msys2
as the default channels.
pixi init myproject\npixi init ~/myproject\npixi init # Initializes directly in the current directory.\npixi init --channel conda-forge --channel bioconda myproject\npixi init --platform osx-64 --platform linux-64 myproject\npixi init --import environment.yml\npixi init --format pyproject\npixi init --format pixi\n
"},{"location":"reference/cli/#add","title":"add
","text":"Adds dependencies to the manifest file. It will only add the package if its version constraint is compatible with the rest of the dependencies in the project. More info on multi-platform configuration.
If the project manifest is a pyproject.toml
, adding a pypi dependency will add it to the native pyproject project.dependencies
array, or to the native project.optional-dependencies
table if a feature is specified:
pixi add --pypi boto3
would add boto3
to the project.dependencies
arraypixi add --pypi boto3 --feature aws
would add boto3
to the project.dependencies.aws
arrayThese dependencies will be read by pixi as if they had been added to the pixi pypi-dependencies
tables of the default or a named feature.
[SPECS]
: The package(s) to add, space separated. The version constraint is optional.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package. Parses dependencies as PEP508 requirements, supporting extras and versions. See configuration for details.--no-install
: Don't install the package to the environment, only add the package to the lock-file.--no-lockfile-update
: Don't update the lock-file, implies the --no-install
flag.--platform <PLATFORM> (-p)
: The platform for which the dependency should be added. (Allowed to be used more than once)--feature <FEATURE> (-f)
: The feature for which the dependency should be added.--editable
: Specifies an editable dependency, only use in combination with --pypi
.pixi add numpy # (1)!\npixi add numpy pandas \"pytorch>=1.8\" # (2)!\npixi add \"numpy>=1.22,<1.24\" # (3)!\npixi add --manifest-path ~/myproject/pixi.toml numpy # (4)!\npixi add --host \"python>=3.9.0\" # (5)!\npixi add --build cmake # (6)!\npixi add --platform osx-64 clang # (7)!\npixi add --no-install numpy # (8)!\npixi add --no-lockfile-update numpy # (9)!\npixi add --feature featurex numpy # (10)!\n\n# Add a pypi dependency\npixi add --pypi requests[security] # (11)!\npixi add --pypi Django==5.1rc1 # (12)!\npixi add --pypi \"boltons>=24.0.0\" --feature lint # (13)!\npixi add --pypi \"boltons @ https://files.pythonhosted.org/packages/46/35/e50d4a115f93e2a3fbf52438435bb2efcf14c11d4fcd6bdcd77a6fc399c9/boltons-24.0.0-py3-none-any.whl\" # (14)!\npixi add --pypi \"exchangelib @ git+https://github.com/ecederstrand/exchangelib\" # (15)!\npixi add --pypi \"project @ file:///absolute/path/to/project\" # (16)!\npixi add --pypi \"project@file:///absolute/path/to/project\" --editable # (17)!\n
numpy
package to the project with the latest available for the solved environment.numpy
package with the version constraint.numpy
package to the project of the manifest file at the given path.python
package as a host dependency. There is currently no different behavior for host dependencies.cmake
package as a build dependency. There is currently no different behavior for build dependencies.clang
package only for the osx-64
platform.numpy
package to the manifest and lockfile, without installing it in an environment.numpy
package to the manifest without updating the lockfile or installing it in the environment.numpy
package in the feature featurex
.requests
package as pypi
dependency with the security
extra.pre-release
version of Django
to the project as a pypi
dependency.boltons
package in the feature lint
as pypi
dependency.boltons
package with the given url
as pypi
dependency.exchangelib
package with the given git
url as pypi
dependency.project
package with the given file
url as pypi
dependency.project
package with the given file
url as an editable
package as pypi
dependency.Tip
If you want to use a non default pinning strategy, you can set it using pixi's configuration.
pixi config set pinning-strategy no-pin --global\n
The default is semver
which will pin the dependencies to the latest major version or minor for v0
versions."},{"location":"reference/cli/#install","title":"install
","text":"Installs an environment based on the manifest file. If there is no pixi.lock
file or it is not up-to-date with the manifest file, it will (re-)generate the lock file.
pixi install
only installs one environment at a time; if you have multiple environments, you can select the right one with the --environment
flag. If you don't provide an environment, the default
environment will be installed.
Running pixi install
is not required before running other commands, as all commands interacting with the environment will first run the install
command if the environment is not ready, to make sure you always run in a correct state. E.g. pixi run
, pixi shell
, pixi shell-hook
, pixi add
, pixi remove
to name a few.
--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to install, if none are provided the default environment will be used.pixi install\npixi install --manifest-path ~/myproject/pixi.toml\npixi install --frozen\npixi install --locked\npixi install --environment lint\npixi install -e lint\n
"},{"location":"reference/cli/#update","title":"update
","text":"The update
command checks if there are newer versions of the dependencies and updates the pixi.lock
file and environments accordingly. It will only update the lock file if the dependencies in the manifest file are still compatible with the new versions.
[PACKAGES]...
The packages to update, space separated. If no packages are provided, all packages will be updated.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--environment <ENVIRONMENT> (-e)
: The environment to update; if none is provided, all environments are updated.--platform <PLATFORM> (-p)
: The platform for which the dependencies should be updated.--dry-run (-n)
: Only show the changes that would be made, without actually updating the lock file or environment.--no-install
: Don't install the (solve) environment needed for solving pypi-dependencies.--json
: Output the changes in json format.pixi update numpy\npixi update numpy pandas\npixi update --manifest-path ~/myproject/pixi.toml numpy\npixi update --environment lint python\npixi update -e lint -e schema -e docs pre-commit\npixi update --platform osx-arm64 mlx\npixi update -p linux-64 -p osx-64 numpy\npixi update --dry-run\npixi update --no-install boto3\n
"},{"location":"reference/cli/#run","title":"run
","text":"The run
command first checks if the environment is ready to use. If you haven't run pixi install
, the run command will do that for you. The custom tasks defined in the manifest file are also available through the run command.
You cannot run pixi run source setup.bash
as source
is not available in the deno_task_shell
commandos and not an executable.
[TASK]...
The task you want to run in the projects environment, this can also be a normal command. And all arguments after the task will be passed to the task.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to run the task in, if none are provided the default environment will be used or a selector will be given to select the right environment.--clean-env
: Run the task in a clean environment; this will remove all environment variables of the shell environment except for the ones pixi sets. THIS DOESN'T WORK ON Windows
. pixi run python\npixi run cowpy \"Hey pixi user\"\npixi run --manifest-path ~/myproject/pixi.toml python\npixi run --frozen python\npixi run --locked python\n# If you have specified a custom task in the pixi.toml you can run it with run as well\npixi run build\n# Extra arguments will be passed to the tasks command.\npixi run task argument1 argument2\n\n# If you have multiple environments you can select the right one with the --environment flag.\npixi run --environment cuda python\n\n# THIS DOESN'T WORK ON WINDOWS\n# If you want to run a command in a clean environment you can use the --clean-env flag.\n# The PATH should only contain the pixi environment here.\npixi run --clean-env \"echo \\$PATH\"\n
Info
In pixi
the deno_task_shell
is the underlying runner of the run command. Checkout their documentation for the syntax and available commands. This is done so that the run commands can be run across all platforms.
Cross environment tasks
If you're using the depends-on
feature of the tasks
, the tasks will be run in the order you specified them. The depends-on
can be used cross environment, e.g. you have this pixi.toml
:
[tasks]\nstart = { cmd = \"python start.py\", depends-on = [\"build\"] }\n\n[feature.build.tasks]\nbuild = \"cargo build\"\n[feature.build.dependencies]\nrust = \">=1.74\"\n\n[environments]\nbuild = [\"build\"]\n
Then you're able to run the build
from the build
environment and start
from the default environment. By only calling:
pixi run start\n
"},{"location":"reference/cli/#exec","title":"exec
","text":"Runs a command in a temporary environment disconnected from any project. This can be useful to quickly test out a certain package or version.
Temporary environments are cached. If the same command is run again, the same environment will be reused.
Cleaning temporary environmentsCurrently, temporary environments can only be cleaned up manually. Environments for pixi exec
are stored under cached-envs-v0/
in the cache directory. Run pixi info
to find the cache directory.
<COMMAND>
: The command to run.--spec <SPECS> (-s)
: Matchspecs of packages to install. If this is not provided, the package is guessed from the command.--channel <CHANNELS> (-c)
: The channel to install the packages from. If not specified the default channel is used.--force-reinstall
If specified, a new environment is always created, even if one already exists.
"},{"location":"reference/cli/#remove","title":"remove
","text":"Removes dependencies from the manifest file.
If the project manifest is a pyproject.toml
, removing a pypi dependency with the --pypi
flag will remove it from either - the native pyproject project.dependencies
array or the native project.optional-dependencies
table (if a feature is specified) - pixi pypi-dependencies
tables of the default or a named feature (if a feature is specified)
<DEPS>...
: List of dependencies you wish to remove from the project.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package.--platform <PLATFORM> (-p)
: The platform from which the dependency should be removed.--feature <FEATURE> (-f)
: The feature from which the dependency should be removed.--no-install
: Don't install the environment, only remove the package from the lock-file and manifest.--no-lockfile-update
: Don't update the lock-file, implies the --no-install
flag.pixi remove numpy\npixi remove numpy pandas pytorch\npixi remove --manifest-path ~/myproject/pixi.toml numpy\npixi remove --host python\npixi remove --build cmake\npixi remove --pypi requests\npixi remove --platform osx-64 --build clang\npixi remove --feature featurex clang\npixi remove --feature featurex --platform osx-64 clang\npixi remove --feature featurex --platform osx-64 --build clang\npixi remove --no-install numpy\n
"},{"location":"reference/cli/#task","title":"task
","text":"If you want to make a shorthand for a specific command you can add a task for it.
"},{"location":"reference/cli/#options_7","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.task add
","text":"Add a task to the manifest file, use --depends-on
to add tasks you want to run before this task, e.g. build before an execute task.
<NAME>
: The name of the task.<COMMAND>
: The command to run. This can be more than one word.Info
If you are using $
for env variables they will be resolved before adding them to the task. If you want to use $
in the task you need to escape it with a \\
, e.g. echo \\$HOME
.
--platform <PLATFORM> (-p)
: the platform for which this task should be added.--feature <FEATURE> (-f)
: the feature for which the task is added; if none is provided, the task is added to the default tasks.--depends-on <DEPENDS_ON>
: the task it depends on, to be run before the one you're adding.--cwd <CWD>
: the working directory for the task relative to the root of the project.--env <ENV>
: the environment variables as key=value
pairs for the task, can be used multiple times, e.g. --env \"VAR1=VALUE1\" --env \"VAR2=VALUE2\"
.--description <DESCRIPTION>
: a description of the task.pixi task add cow cowpy \"Hello User\"\npixi task add tls ls --cwd tests\npixi task add test cargo t --depends-on build\npixi task add build-osx \"METAL=1 cargo build\" --platform osx-64\npixi task add train python train.py --feature cuda\npixi task add publish-pypi \"hatch publish --yes --repo main\" --feature build --env HATCH_CONFIG=config/hatch.toml --description \"Publish the package to pypi\"\n
This adds the following to the manifest file:
[tasks]\ncow = \"cowpy \\\"Hello User\\\"\"\ntls = { cmd = \"ls\", cwd = \"tests\" }\ntest = { cmd = \"cargo t\", depends-on = [\"build\"] }\n\n[target.osx-64.tasks]\nbuild-osx = \"METAL=1 cargo build\"\n\n[feature.cuda.tasks]\ntrain = \"python train.py\"\n\n[feature.build.tasks]\npublish-pypi = { cmd = \"hatch publish --yes --repo main\", env = { HATCH_CONFIG = \"config/hatch.toml\" }, description = \"Publish the package to pypi\" }\n
Which you can then run with the run
command:
pixi run cow\n# Extra arguments will be passed to the tasks command.\npixi run test --test test1\n
"},{"location":"reference/cli/#task-remove","title":"task remove
","text":"Remove the task from the manifest file
"},{"location":"reference/cli/#arguments_7","title":"Arguments","text":"<NAMES>
: The names of the tasks, space separated.--platform <PLATFORM> (-p)
: the platform for which this task is removed.--feature <FEATURE> (-f)
: the feature for which the task is removed.pixi task remove cow\npixi task remove --platform linux-64 test\npixi task remove --feature cuda task\n
"},{"location":"reference/cli/#task-alias","title":"task alias
","text":"Create an alias for a task.
"},{"location":"reference/cli/#arguments_8","title":"Arguments","text":"<ALIAS>
: The alias name<DEPENDS_ON>
: The names of the tasks you want to execute on this alias, order counts, first one runs first.--platform <PLATFORM> (-p)
: the platform for which this alias is created.pixi task alias test-all test-py test-cpp test-rust\npixi task alias --platform linux-64 test test-linux\npixi task alias moo cow\n
"},{"location":"reference/cli/#task-list","title":"task list
","text":"List all tasks in the project.
"},{"location":"reference/cli/#options_11","title":"Options","text":"--environment
(-e
): the environment's tasks list; if none is provided, the default tasks will be listed.--summary
(-s
): list the tasks per environment.pixi task list\npixi task list --environment cuda\npixi task list --summary\n
"},{"location":"reference/cli/#list","title":"list
","text":"List project's packages. Highlighted packages are explicit dependencies.
"},{"location":"reference/cli/#options_12","title":"Options","text":"--platform <PLATFORM> (-p)
: The platform to list packages for. Defaults to the current platform--json
: Whether to output in json format.--json-pretty
: Whether to output in pretty json format--sort-by <SORT_BY>
: Sorting strategy [default: name] [possible values: size, name, type]--explicit (-x)
: Only list the packages that are explicitly added to the manifest file.--manifest-path <MANIFEST_PATH>
: The path to manifest file, by default it searches for one in the parent directories.--environment (-e)
: The environment's packages to list; if none is provided, the default environment's packages will be listed.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: Only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--no-install
: Don't install the environment for pypi solving, only update the lock-file if it can solve without installing. (Implied by --frozen
and --locked
)pixi list\npixi list --json-pretty\npixi list --explicit\npixi list --sort-by size\npixi list --platform win-64\npixi list --environment cuda\npixi list --frozen\npixi list --locked\npixi list --no-install\n
Output will look like this, where python
will be green as it is the package that was explicitly added to the manifest file:
\u279c pixi list\n Package Version Build Size Kind Source\n _libgcc_mutex 0.1 conda_forge 2.5 KiB conda _libgcc_mutex-0.1-conda_forge.tar.bz2\n _openmp_mutex 4.5 2_gnu 23.1 KiB conda _openmp_mutex-4.5-2_gnu.tar.bz2\n bzip2 1.0.8 hd590300_5 248.3 KiB conda bzip2-1.0.8-hd590300_5.conda\n ca-certificates 2023.11.17 hbcca054_0 150.5 KiB conda ca-certificates-2023.11.17-hbcca054_0.conda\n ld_impl_linux-64 2.40 h41732ed_0 688.2 KiB conda ld_impl_linux-64-2.40-h41732ed_0.conda\n libexpat 2.5.0 hcb278e6_1 76.2 KiB conda libexpat-2.5.0-hcb278e6_1.conda\n libffi 3.4.2 h7f98852_5 56.9 KiB conda libffi-3.4.2-h7f98852_5.tar.bz2\n libgcc-ng 13.2.0 h807b86a_4 755.7 KiB conda libgcc-ng-13.2.0-h807b86a_4.conda\n libgomp 13.2.0 h807b86a_4 412.2 KiB conda libgomp-13.2.0-h807b86a_4.conda\n libnsl 2.0.1 hd590300_0 32.6 KiB conda libnsl-2.0.1-hd590300_0.conda\n libsqlite 3.44.2 h2797004_0 826 KiB conda libsqlite-3.44.2-h2797004_0.conda\n libuuid 2.38.1 h0b41bf4_0 32.8 KiB conda libuuid-2.38.1-h0b41bf4_0.conda\n libxcrypt 4.4.36 hd590300_1 98 KiB conda libxcrypt-4.4.36-hd590300_1.conda\n libzlib 1.2.13 hd590300_5 60.1 KiB conda libzlib-1.2.13-hd590300_5.conda\n ncurses 6.4 h59595ed_2 863.7 KiB conda ncurses-6.4-h59595ed_2.conda\n openssl 3.2.0 hd590300_1 2.7 MiB conda openssl-3.2.0-hd590300_1.conda\n python 3.12.1 hab00c5b_1_cpython 30.8 MiB conda python-3.12.1-hab00c5b_1_cpython.conda\n readline 8.2 h8228510_1 274.9 KiB conda readline-8.2-h8228510_1.conda\n tk 8.6.13 noxft_h4845f30_101 3.2 MiB conda tk-8.6.13-noxft_h4845f30_101.conda\n tzdata 2023d h0c530f3_0 116.8 KiB conda tzdata-2023d-h0c530f3_0.conda\n xz 5.2.6 h166bdaf_0 408.6 KiB conda xz-5.2.6-h166bdaf_0.tar.bz2\n
"},{"location":"reference/cli/#tree","title":"tree
","text":"Display the project's packages in a tree. Highlighted packages are those specified in the manifest.
The package tree can also be inverted (-i
), to see which packages require a specific dependencies.
REGEX
optional regex of which dependencies to filter the tree to, or which dependencies to start with when inverting the tree.--invert (-i)
: Invert the dependency tree, that is given a REGEX
pattern that matches some packages, show all the packages that depend on those.--platform <PLATFORM> (-p)
: The platform to list packages for. Defaults to the current platform--manifest-path <MANIFEST_PATH>
: The path to manifest file, by default it searches for one in the parent directories.--environment (-e)
: The environment's packages to list; if none is provided, the default environment's packages will be listed.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: Only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--no-install
: Don't install the environment for pypi solving, only update the lock-file if it can solve without installing. (Implied by --frozen
and --locked
)pixi tree\npixi tree pre-commit\npixi tree -i yaml\npixi tree --environment docs\npixi tree --platform win-64\n
Warning
Use -v
to show which pypi
packages are not yet parsed correctly. The extras
and markers
parsing is still under development.
Output will look like this, where direct packages in the manifest file will be green. Once a package has been displayed once, the tree won't continue to recurse through its dependencies (compare the first time python
appears, vs the rest), and it will instead be marked with a star (*)
.
Version numbers are colored by the package type, yellow for Conda packages and blue for PyPI.
\u279c pixi tree\n\u251c\u2500\u2500 pre-commit v3.3.3\n\u2502 \u251c\u2500\u2500 cfgv v3.3.1\n\u2502 \u2502 \u2514\u2500\u2500 python v3.12.2\n\u2502 \u2502 \u251c\u2500\u2500 bzip2 v1.0.8\n\u2502 \u2502 \u251c\u2500\u2500 libexpat v2.6.2\n\u2502 \u2502 \u251c\u2500\u2500 libffi v3.4.2\n\u2502 \u2502 \u251c\u2500\u2500 libsqlite v3.45.2\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13\n\u2502 \u2502 \u251c\u2500\u2500 libzlib v1.2.13 (*)\n\u2502 \u2502 \u251c\u2500\u2500 ncurses v6.4.20240210\n\u2502 \u2502 \u251c\u2500\u2500 openssl v3.2.1\n\u2502 \u2502 \u251c\u2500\u2500 readline v8.2\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 ncurses v6.4.20240210 (*)\n\u2502 \u2502 \u251c\u2500\u2500 tk v8.6.13\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13 (*)\n\u2502 \u2502 \u2514\u2500\u2500 xz v5.2.6\n\u2502 \u251c\u2500\u2500 identify v2.5.35\n\u2502 \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n...\n\u2514\u2500\u2500 tbump v6.9.0\n...\n \u2514\u2500\u2500 tomlkit v0.12.4\n \u2514\u2500\u2500 python v3.12.2 (*)\n
A regex pattern can be specified to filter the tree to just those that show a specific direct, or transitive dependency:
\u279c pixi tree pre-commit\n\u2514\u2500\u2500 pre-commit v3.3.3\n \u251c\u2500\u2500 virtualenv v20.25.1\n \u2502 \u251c\u2500\u2500 filelock v3.13.1\n \u2502 \u2502 \u2514\u2500\u2500 python v3.12.2\n \u2502 \u2502 \u251c\u2500\u2500 libexpat v2.6.2\n \u2502 \u2502 \u251c\u2500\u2500 readline v8.2\n \u2502 \u2502 \u2502 \u2514\u2500\u2500 ncurses v6.4.20240210\n \u2502 \u2502 \u251c\u2500\u2500 libsqlite v3.45.2\n \u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13\n \u2502 \u2502 \u251c\u2500\u2500 bzip2 v1.0.8\n \u2502 \u2502 \u251c\u2500\u2500 libzlib v1.2.13 (*)\n \u2502 \u2502 \u251c\u2500\u2500 libffi v3.4.2\n \u2502 \u2502 \u251c\u2500\u2500 tk v8.6.13\n \u2502 \u2502 \u2502 \u2514\u2500\u2500 libzlib v1.2.13 (*)\n \u2502 \u2502 \u251c\u2500\u2500 xz v5.2.6\n \u2502 \u2502 \u251c\u2500\u2500 ncurses v6.4.20240210 (*)\n \u2502 \u2502 \u2514\u2500\u2500 openssl v3.2.1\n \u2502 \u251c\u2500\u2500 platformdirs v4.2.0\n \u2502 \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n \u2502 \u251c\u2500\u2500 distlib v0.3.8\n \u2502 \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n \u2502 \u2514\u2500\u2500 python v3.12.2 (*)\n \u251c\u2500\u2500 pyyaml v6.0.1\n...\n
Additionally, the tree can be inverted, and it can show which packages depend on a regex pattern. The packages specified in the manifest will also be highlighted (in this case cffconvert
and pre-commit
would be).
\u279c pixi tree -i yaml\n\nruamel.yaml v0.18.6\n\u251c\u2500\u2500 pykwalify v1.8.0\n\u2502 \u2514\u2500\u2500 cffconvert v2.0.0\n\u2514\u2500\u2500 cffconvert v2.0.0\n\npyyaml v6.0.1\n\u2514\u2500\u2500 pre-commit v3.3.3\n\nruamel.yaml.clib v0.2.8\n\u2514\u2500\u2500 ruamel.yaml v0.18.6\n \u251c\u2500\u2500 pykwalify v1.8.0\n \u2502 \u2514\u2500\u2500 cffconvert v2.0.0\n \u2514\u2500\u2500 cffconvert v2.0.0\n\nyaml v0.2.5\n\u2514\u2500\u2500 pyyaml v6.0.1\n \u2514\u2500\u2500 pre-commit v3.3.3\n
"},{"location":"reference/cli/#shell","title":"shell
","text":"This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit
.
--change-ps1 <true or false>
: When set to false, the (pixi)
prefix in the shell prompt is removed (default: true
). The default behavior can be configured globally.--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate the shell in, if none are provided the default environment will be used or a selector will be given to select the right environment.pixi shell\nexit\npixi shell --manifest-path ~/myproject/pixi.toml\nexit\npixi shell --frozen\nexit\npixi shell --locked\nexit\npixi shell --environment cuda\nexit\n
"},{"location":"reference/cli/#shell-hook","title":"shell-hook
","text":"This command prints the activation script of an environment.
"},{"location":"reference/cli/#options_15","title":"Options","text":"--shell <SHELL> (-s)
: The shell for which the activation script should be printed. Defaults to the current shell. Currently supported variants: [bash
, zsh
, xonsh
, cmd
, powershell
, fish
, nushell
]--manifest-path
: the path to manifest file, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lock file, doesn't update pixi.lock
if it isn't up-to-date with manifest file. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the manifest file1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate. If none is provided, the default environment will be used, or a selector will be shown to pick the right environment.--json
: Print all environment variables that are exported by running the activation script as JSON. When specifying this option, --shell
is ignored.pixi shell-hook\npixi shell-hook --shell bash\npixi shell-hook --shell zsh\npixi shell-hook -s powershell\npixi shell-hook --manifest-path ~/myproject/pixi.toml\npixi shell-hook --frozen\npixi shell-hook --locked\npixi shell-hook --environment cuda\npixi shell-hook --json\n
Example use case: when you want to get rid of the pixi
executable in a Docker container.
pixi shell-hook --shell bash > /etc/profile.d/pixi.sh\nrm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.\n
"},{"location":"reference/cli/#search","title":"search
","text":"Search for a package; the output will list the latest version of the package.
"},{"location":"reference/cli/#arguments_10","title":"Arguments","text":"<PACKAGE>
: Name of the package to search for; it's possible to use wildcards (*
).--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--limit <LIMIT> (-l)
: optionally limit the number of search results.--platform <PLATFORM> (-p)
: specify a platform that you want to search for. (default: current platform)pixi search pixi\npixi search --limit 30 \"py*\"\n# search in a different channel and for a specific platform\npixi search -c robostack --platform linux-64 \"plotjuggler*\"\n
"},{"location":"reference/cli/#self-update","title":"self-update
","text":"Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi
), pixi won't update, in order to prevent breaking the current installation (Homebrew, etc.). The behavior can be overridden with the --force
flag.
--version <VERSION>
: The desired version (to downgrade or upgrade to). Update to the latest version if not specified.--force
: Force the update even if the pixi binary is not found in the default location.pixi self-update\npixi self-update --version 0.13.0\npixi self-update --force\n
"},{"location":"reference/cli/#info","title":"info
","text":"Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.
"},{"location":"reference/cli/#options_18","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--extended
: extend the information with slower system queries, like directory sizes.--json
: Get a machine-readable version of the information as output.pixi info\npixi info --json --extended\n
"},{"location":"reference/cli/#clean","title":"clean
","text":"Clean the parts of your system which are touched by pixi. Defaults to cleaning the environments and task cache. Use the cache
subcommand to clean the cache
--manifest-path <MANIFEST_PATH>
: the path to the manifest file; by default it searches for one in the parent directories.--environment <ENVIRONMENT> (-e)
: The environment to clean. If none is provided, all environments will be removed.pixi clean\n
"},{"location":"reference/cli/#clean-cache","title":"clean cache
","text":"Clean the pixi cache on your system.
"},{"location":"reference/cli/#options_20","title":"Options","text":"--pypi
: Clean the pypi cache.--conda
: Clean the conda cache.--yes
: Skip the confirmation prompt.pixi clean cache # clean all pixi caches\npixi clean cache --pypi # clean only the pypi cache\npixi clean cache --conda # clean only the conda cache\npixi clean cache --yes # skip the confirmation prompt\n
"},{"location":"reference/cli/#upload","title":"upload
","text":"Upload a package to a prefix.dev channel
"},{"location":"reference/cli/#arguments_11","title":"Arguments","text":"<HOST>
: The host + channel to upload to.<PACKAGE_FILE>
: The package file to upload.pixi upload https://prefix.dev/api/v1/upload/my_channel my_package.conda\n
"},{"location":"reference/cli/#auth","title":"auth
","text":"This command is used to authenticate the user's access to remote hosts such as prefix.dev
or anaconda.org
for private channels.
auth login
","text":"Store authentication information for given host.
Tip
The host is real hostname not a channel.
"},{"location":"reference/cli/#arguments_12","title":"Arguments","text":"<HOST>
: The host to authenticate with.--token <TOKEN>
: The token to use for authentication with prefix.dev.--username <USERNAME>
: The username to use for basic HTTP authentication.--password <PASSWORD>
: The password to use for basic HTTP authentication.--conda-token <CONDA_TOKEN>
: The token to use on anaconda.org
/ quetz
authentication.pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaAndHANx0qHjq7f2iD\npixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP\npixi auth login https://myquetz.server --username john --password xxxxxx\n
"},{"location":"reference/cli/#auth-logout","title":"auth logout
","text":"Remove authentication information for a given host.
"},{"location":"reference/cli/#arguments_13","title":"Arguments","text":"<HOST>
: The host to authenticate with.pixi auth logout <HOST>\npixi auth logout repo.prefix.dev\npixi auth logout anaconda.org\n
"},{"location":"reference/cli/#config","title":"config
","text":"Use this command to manage the configuration.
"},{"location":"reference/cli/#options_22","title":"Options","text":"--system (-s)
: Specify management scope to system configuration.--global (-g)
: Specify management scope to global configuration.--local (-l)
: Specify management scope to local configuration.Checkout the pixi configuration for more information about the locations.
"},{"location":"reference/cli/#config-edit","title":"config edit
","text":"Edit the configuration file in the default editor.
pixi config edit --system\npixi config edit --local\npixi config edit -g\n
"},{"location":"reference/cli/#config-list","title":"config list
","text":"List the configuration
"},{"location":"reference/cli/#arguments_14","title":"Arguments","text":"[KEY]
: The key to list the value of. (all if not provided)--json
: Output the configuration in JSON format.pixi config list default-channels\npixi config list --json\npixi config list --system\npixi config list -g\n
"},{"location":"reference/cli/#config-prepend","title":"config prepend
","text":"Prepend a value to a list configuration key.
"},{"location":"reference/cli/#arguments_15","title":"Arguments","text":"<KEY>
: The key to prepend the value to.<VALUE>
: The value to prepend.pixi config prepend default-channels conda-forge\n
"},{"location":"reference/cli/#config-append","title":"config append
","text":"Append a value to a list configuration key.
"},{"location":"reference/cli/#arguments_16","title":"Arguments","text":"<KEY>
: The key to append the value to.<VALUE>
: The value to append.pixi config append default-channels robostack\npixi config append default-channels bioconda --global\n
"},{"location":"reference/cli/#config-set","title":"config set
","text":"Set a configuration key to a value.
"},{"location":"reference/cli/#arguments_17","title":"Arguments","text":"<KEY>
: The key to set the value of.[VALUE]
: The value to set. (if not provided, the key will be removed)pixi config set default-channels '[\"conda-forge\", \"bioconda\"]'\npixi config set --global mirrors '{\"https://conda.anaconda.org/\": [\"https://prefix.dev/conda-forge\"]}'\npixi config set repodata-config.disable-zstd true --system\npixi config set --global detached-environments \"/opt/pixi/envs\"\npixi config set detached-environments false\n
"},{"location":"reference/cli/#config-unset","title":"config unset
","text":"Unset a configuration key.
"},{"location":"reference/cli/#arguments_18","title":"Arguments","text":"<KEY>
: The key to unset.pixi config unset default-channels\npixi config unset --global mirrors\npixi config unset repodata-config.disable-zstd --system\n
"},{"location":"reference/cli/#global","title":"global
","text":"Global is the main entry point for the part of pixi that executes on the global(system) level.
Tip
Binaries and environments installed globally are stored in ~/.pixi
by default, this can be changed by setting the PIXI_HOME
environment variable.
global install
","text":"This command installs package(s) into its own environment and adds the binary to PATH
, allowing you to access it anywhere on your system without activating the environment.
1.<PACKAGE>
: The package(s) to install, this can also be a version constraint.
--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that you want to install the package for. (default: current platform)--no-activation
: Do not insert conda_prefix, path modifications, and activation script into the installed executable script.pixi global install ruff\n# multiple packages can be installed at once\npixi global install starship rattler-build\n# specify the channel(s)\npixi global install --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global install -c conda-forge -c bioconda trackplot\n\n# Support full conda matchspec\npixi global install python=3.9.*\npixi global install \"python [version='3.11.0', build_number=1]\"\npixi global install \"python [version='3.11.0', build=he550d4f_1_cpython]\"\npixi global install python=3.11.0=h10a6764_1_cpython\n\n# Install for a specific platform, only useful on osx-arm64\npixi global install --platform osx-64 ruff\n\n# Install without inserting activation code into the executable script\npixi global install ruff --no-activation\n
Tip
Running osx-64
on Apple Silicon will install the Intel binary but run it using Rosetta
pixi global install --platform osx-64 ruff\n
After using global install, you can use the package you installed anywhere on your system.
"},{"location":"reference/cli/#global-list","title":"global list
","text":"This command shows the current installed global environments including what binaries come with it. A global installed package/environment can possibly contain multiple binaries and they will be listed out in the command output. Here is an example of a few installed packages:
> pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n| \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n| \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n| \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n| \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n \u2514\u2500 exec: uv\n
"},{"location":"reference/cli/#global-upgrade","title":"global upgrade
","text":"This command upgrades a globally installed package (to the latest version by default).
"},{"location":"reference/cli/#arguments_20","title":"Arguments","text":"<PACKAGE>
: The package to upgrade.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note that the channel the package was installed from will always be used for the upgrade. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that you want to upgrade the package for. (default: current platform)pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
"},{"location":"reference/cli/#global-upgrade-all","title":"global upgrade-all
","text":"This command upgrades all globally installed packages to their latest version.
"},{"location":"reference/cli/#options_26","title":"Options","text":"--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note that the channel the package was installed from will always be used for the upgrade. (Allowed to be used more than once)pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda trackplot\n
"},{"location":"reference/cli/#global-remove","title":"global remove
","text":"Removes a package previously installed into a globally accessible location via pixi global install
Use pixi global info
to find out what the package name is that belongs to the tool you want to remove.
<PACKAGE>
: The package(s) to remove.pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
"},{"location":"reference/cli/#project","title":"project
","text":"This subcommand allows you to modify the project configuration through the command line interface.
"},{"location":"reference/cli/#options_27","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to manifest file, by default it searches for one in the parent directories.project channel add
","text":"Add channels to the channel list in the project configuration. When you add channels, the channels are tested for existence, added to the lock file and the environment is reinstalled.
"},{"location":"reference/cli/#arguments_22","title":"Arguments","text":"<CHANNEL>
: The channels to add, name or URL.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is added.pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
"},{"location":"reference/cli/#project-channel-list","title":"project channel list
","text":"List the channels in the manifest file
"},{"location":"reference/cli/#options_29","title":"Options","text":"urls
: show the urls of the channels instead of the names.$ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
"},{"location":"reference/cli/#project-channel-remove","title":"project channel remove
","text":"List the channels in the manifest file
"},{"location":"reference/cli/#arguments_23","title":"Arguments","text":"<CHANNEL>...
: The channels to remove, name(s) or URL(s).--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is removed.pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
"},{"location":"reference/cli/#project-description-get","title":"project description get
","text":"Get the project description.
$ pixi project description get\nPackage management made easy!\n
"},{"location":"reference/cli/#project-description-set","title":"project description set
","text":"Set the project description.
"},{"location":"reference/cli/#arguments_24","title":"Arguments","text":"<DESCRIPTION>
: The description to set.pixi project description set \"my new description\"\n
"},{"location":"reference/cli/#project-environment-add","title":"project environment add
","text":"Add an environment to the manifest file.
"},{"location":"reference/cli/#arguments_25","title":"Arguments","text":"<NAME>
: The name of the environment to add.-f, --feature <FEATURES>
: Features to add to the environment.--solve-group <SOLVE_GROUP>
: The solve-group to add the environment to.--no-default-feature
: Don't include the default feature in the environment.--force
: Update the manifest even if the environment already exists.pixi project environment add env1 --feature feature1 --feature feature2\npixi project environment add env2 -f feature1 --solve-group test\npixi project environment add env3 -f feature1 --no-default-feature\npixi project environment add env3 -f feature1 --force\n
"},{"location":"reference/cli/#project-environment-remove","title":"project environment remove
","text":"Remove an environment from the manifest file.
"},{"location":"reference/cli/#arguments_26","title":"Arguments","text":"<NAME>
: The name of the environment to remove.pixi project environment remove env1\n
"},{"location":"reference/cli/#project-environment-list","title":"project environment list
","text":"List the environments in the manifest file.
pixi project environment list\n
"},{"location":"reference/cli/#project-export-conda_environment","title":"project export conda_environment
","text":"Exports a conda environment.yml
file. The file can be used to create a conda environment using conda/mamba:
pixi project export conda-environment environment.yml\nmamba create --name <env> --file environment.yml\n
"},{"location":"reference/cli/#arguments_27","title":"Arguments","text":"<OUTPUT_PATH>
: Optional path to render environment.yml to. Otherwise it will be printed to standard out.--environment <ENVIRONMENT> (-e)
: Environment to render.--platform <PLATFORM> (-p)
: The platform to render.pixi project export conda-environment --environment lint\npixi project export conda-environment --platform linux-64 environment.linux-64.yml\n
"},{"location":"reference/cli/#project-export-conda_explicit_spec","title":"project export conda_explicit_spec
","text":"Render a platform-specific conda explicit specification file for an environment. The file can be then used to create a conda environment using conda/mamba:
mamba create --name <env> --file <explicit spec file>\n
As the explicit specification file format does not support pypi-dependencies, use the --ignore-pypi-errors
option to ignore those dependencies.
<OUTPUT_DIR>
: Output directory for rendered explicit environment spec files.--environment <ENVIRONMENT> (-e)
: Environment to render. Can be repeated for multiple envs. Defaults to all environments.--platform <PLATFORM> (-p)
: The platform to render. Can be repeated for multiple platforms. Defaults to all platforms available for selected environments.--ignore-pypi-errors
: PyPI dependencies are not supported in the conda explicit spec file. This flag allows creating the spec file even if PyPI dependencies are present.pixi project export conda_explicit_spec output\npixi project export conda_explicit_spec -e default -e test -p linux-64 output\n
"},{"location":"reference/cli/#project-platform-add","title":"project platform add
","text":"Adds a platform(s) to the manifest file and updates the lock file.
"},{"location":"reference/cli/#arguments_29","title":"Arguments","text":"<PLATFORM>...
: The platforms to add.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be added.pixi project platform add win-64\npixi project platform add --feature test win-64\n
"},{"location":"reference/cli/#project-platform-list","title":"project platform list
","text":"List the platforms in the manifest file.
$ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
"},{"location":"reference/cli/#project-platform-remove","title":"project platform remove
","text":"Remove platform(s) from the manifest file and updates the lock file.
"},{"location":"reference/cli/#arguments_30","title":"Arguments","text":"<PLATFORM>...
: The platforms to remove.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be removed.pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
"},{"location":"reference/cli/#project-version-get","title":"project version get
","text":"Get the project version.
$ pixi project version get\n0.11.0\n
"},{"location":"reference/cli/#project-version-set","title":"project version set
","text":"Set the project version.
"},{"location":"reference/cli/#arguments_31","title":"Arguments","text":"<VERSION>
: The version to set.pixi project version set \"0.13.0\"\n
"},{"location":"reference/cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}
","text":"Bump the project version to {MAJOR|MINOR|PATCH}.
pixi project version major\npixi project version minor\npixi project version patch\n
An up-to-date lock file means that the dependencies in the lock file are allowed by the dependencies in the manifest file. For example
python = \">= 3.11\"
is up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.python = \">= 3.12\"
is not up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.Being up-to-date does not mean that the lock file holds the latest version available on the channel for the given dependency.\u00a0\u21a9\u21a9\u21a9\u21a9\u21a9\u21a9
Apart from the project specific configuration pixi supports configuration options which are not required for the project to work but are local to the machine. The configuration is loaded in the following order:
LinuxmacOSWindows Priority Location Comments 1/etc/pixi/config.toml
System-wide configuration 2 $XDG_CONFIG_HOME/pixi/config.toml
XDG compliant user-specific configuration 3 $HOME/.config/pixi/config.toml
User-specific configuration 4 $PIXI_HOME/config.toml
Global configuration in the user home directory. PIXI_HOME
defaults to ~/.pixi
5 your_project/.pixi/config.toml
Project-specific configuration 6 Command line arguments (--tls-no-verify
, --change-ps1=false
, etc.) Configuration via command line arguments Priority Location Comments 1 /etc/pixi/config.toml
System-wide configuration 2 $XDG_CONFIG_HOME/pixi/config.toml
XDG compliant user-specific configuration 3 $HOME/Library/Application Support/pixi/config.toml
User-specific configuration 4 $PIXI_HOME/config.toml
Global configuration in the user home directory. PIXI_HOME
defaults to ~/.pixi
5 your_project/.pixi/config.toml
Project-specific configuration 6 Command line arguments (--tls-no-verify
, --change-ps1=false
, etc.) Configuration via command line arguments Priority Location Comments 1 C:\\ProgramData\\pixi\\config.toml
System-wide configuration 2 %APPDATA%\\pixi\\config.toml
User-specific configuration 3 $PIXI_HOME\\config.toml
Global configuration in the user home directory. PIXI_HOME
defaults to %USERPROFILE%/.pixi
4 your_project\\.pixi\\config.toml
Project-specific configuration 5 Command line arguments (--tls-no-verify
, --change-ps1=false
, etc.) Configuration via command line arguments Note
The highest priority wins. If a configuration file is found in a higher priority location, the values from the configuration read from lower priority locations are overwritten.
Note
To find the locations where pixi
looks for configuration files, run pixi
with -vv
.
In versions of pixi 0.20.1
and older, the global configuration used snake_case. We've changed to kebab-case
for consistency with the rest of the configuration, but we still support the old snake_case
configuration for these older configuration options:
default_channels
change_ps1
tls_no_verify
authentication_override_file
mirrors
and sub-optionsrepodata-config
and sub-optionsThe following reference describes all available configuration options.
"},{"location":"reference/pixi_configuration/#default-channels","title":"default-channels
","text":"The default channels to select when running pixi init
or pixi global install
. This defaults to only conda-forge. config.toml
default-channels = [\"conda-forge\"]\n
Note
The default-channels
are only used when initializing a new project. Once initialized the channels
are used from the project manifest.
change-ps1
","text":"When set to false, the (pixi)
prefix in the shell prompt is removed. This applies to the pixi shell
subcommand. You can override this from the CLI with --change-ps1
.
change-ps1 = true\n
"},{"location":"reference/pixi_configuration/#tls-no-verify","title":"tls-no-verify
","text":"When set to true, the TLS certificates are not verified.
Warning
This is a security risk and should only be used for testing purposes or internal networks.
You can override this from the CLI with --tls-no-verify
.
tls-no-verify = false\n
"},{"location":"reference/pixi_configuration/#authentication-override-file","title":"authentication-override-file
","text":"Override from where the authentication information is loaded. Usually, we try to use the keyring to load authentication data from, and only use a JSON file as a fallback. This option allows you to force the use of a JSON file. Read more in the authentication section. config.toml
authentication-override-file = \"/path/to/your/override.json\"\n
"},{"location":"reference/pixi_configuration/#detached-environments","title":"detached-environments
","text":"The directory where pixi stores the project environments, what would normally be placed in the .pixi/envs
folder in a project's root. It doesn't affect the environments built for pixi global
. The location of environments created for a pixi global
installation can be controlled using the PIXI_HOME
environment variable.
Warning
We recommend against using this because any environment created for a project is no longer placed in the same folder as the project. This creates a disconnect between the project and its environments and manual cleanup of the environments is required when deleting the project.
However, in some cases, this option can still be very useful, for instance to:
This field can consist of two types of input.
true
or false
, which will enable or disable the feature respectively. (not \"true\"
or \"false\"
, this is read as false
)config.toml
detached-environments = true\n
or: config.tomldetached-environments = \"/opt/pixi/envs\"\n
The environments will be stored in the cache directory when this option is true
. When you specify a custom path the environments will be stored in that directory.
The resulting directory structure will look like this: config.toml
detached-environments = \"/opt/pixi/envs\"\n
/opt/pixi/envs\n\u251c\u2500\u2500 pixi-6837172896226367631\n\u2502 \u2514\u2500\u2500 envs\n\u2514\u2500\u2500 NAME_OF_PROJECT-HASH_OF_ORIGINAL_PATH\n \u251c\u2500\u2500 envs # the runnable environments\n \u2514\u2500\u2500 solve-group-envs # If there are solve groups\n
"},{"location":"reference/pixi_configuration/#pinning-strategy","title":"pinning-strategy
","text":"The strategy to use for pinning dependencies when running pixi add
. The default is semver
but you can set the following:
no-pin
: No pinning, resulting in an unconstraint dependency. *
semver
: Pinning to the latest version that satisfies the semver constraint. Resulting in a pin to major for most versions and to minor for v0
versions.exact-version
: Pinning to the exact version, 1.2.3
-> ==1.2.3
.major
: Pinning to the major version, 1.2.3
-> >=1.2.3, <2
.minor
: Pinning to the minor version, 1.2.3
-> >=1.2.3, <1.3
.latest-up
: Pinning to the latest version, 1.2.3
-> >=1.2.3
.pinning-strategy = \"no-pin\"\n
"},{"location":"reference/pixi_configuration/#mirrors","title":"mirrors
","text":"Configuration for conda channel-mirrors, more info below.
config.toml[mirrors]\n# redirect all requests for conda-forge to the prefix.dev mirror\n\"https://conda.anaconda.org/conda-forge\" = [\n \"https://prefix.dev/conda-forge\"\n]\n\n# redirect all requests for bioconda to one of the three listed mirrors\n# Note: for repodata we try the first mirror first.\n\"https://conda.anaconda.org/bioconda\" = [\n \"https://conda.anaconda.org/bioconda\",\n # OCI registries are also supported\n \"oci://ghcr.io/channel-mirrors/bioconda\",\n \"https://prefix.dev/bioconda\",\n]\n
"},{"location":"reference/pixi_configuration/#repodata-config","title":"repodata-config
","text":"Configuration for repodata fetching. config.toml
[repodata-config]\n# disable fetching of jlap, bz2 or zstd repodata files.\n# This should only be used for specific old versions of artifactory and other non-compliant\n# servers.\ndisable-jlap = true # don't try to download repodata.jlap\ndisable-bzip2 = true # don't try to download repodata.json.bz2\ndisable-zstd = true # don't try to download repodata.json.zst\n
"},{"location":"reference/pixi_configuration/#pypi-config","title":"pypi-config
","text":"To setup a certain number of defaults for the usage of PyPI registries. You can use the following configuration options:
index-url
: The default index URL to use for PyPI packages. This will be added to a manifest file on a pixi init
.extra-index-urls
: A list of additional URLs to use for PyPI packages. This will be added to a manifest file on a pixi init
.keyring-provider
: Allows the use of the keyring python package to store and retrieve credentials.[pypi-config]\n# Main index url\nindex-url = \"https://pypi.org/simple\"\n# list of additional urls\nextra-index-urls = [\"https://pypi.org/simple2\"]\n# can be \"subprocess\" or \"disabled\"\nkeyring-provider = \"subprocess\"\n
index-url
and extra-index-urls
are not globals
Unlike pip, these settings, with the exception of keyring-provider
will only modify the pixi.toml
/pyproject.toml
file and are not globally interpreted when not present in the manifest. This is because we want to keep the manifest file as complete and reproducible as possible.
You can configure mirrors for conda channels. We expect that mirrors are exact copies of the original channel. The implementation will look for the mirror key (a URL) in the mirrors
section of the configuration file and replace the original URL with the mirror URL.
To also include the original URL, you have to repeat it in the list of mirrors.
The mirrors are prioritized based on the order of the list. We attempt to fetch the repodata (the most important file) from the first mirror in the list. The repodata contains all the SHA256 hashes of the individual packages, so it is important to get this file from a trusted source.
You can also specify mirrors for an entire \"host\", e.g.
config.toml[mirrors]\n\"https://conda.anaconda.org\" = [\n \"https://prefix.dev/\"\n]\n
This will forward all request to channels on anaconda.org to prefix.dev. Channels that are not currently mirrored on prefix.dev will fail in the above example.
"},{"location":"reference/pixi_configuration/#oci-mirrors","title":"OCI Mirrors","text":"You can also specify mirrors on the OCI registry. There is a public mirror on the Github container registry (ghcr.io) that is maintained by the conda-forge team. You can use it like this:
config.toml[mirrors]\n\"https://conda.anaconda.org/conda-forge\" = [\n \"oci://ghcr.io/channel-mirrors/conda-forge\"\n]\n
The GHCR mirror also contains bioconda
packages. You can search the available packages on Github.
The pixi.toml
is the pixi project configuration file, also known as the project manifest.
A toml
file is structured in different tables. This document will explain the usage of the different tables. For more technical documentation check pixi on crates.io.
Tip
We also support the pyproject.toml
file. It has the same structure as the pixi.toml
file. except that you need to prepend the tables with tool.pixi
instead of just the table name. For example, the [project]
table becomes [tool.pixi.project]
. There are also some small extras that are available in the pyproject.toml
file, checkout the pyproject.toml documentation for more information.
project
table","text":"The minimally required information in the project
table is:
[project]\nchannels = [\"conda-forge\"]\nname = \"project-name\"\nplatforms = [\"linux-64\"]\n
"},{"location":"reference/project_configuration/#name","title":"name
","text":"The name of the project.
name = \"project-name\"\n
"},{"location":"reference/project_configuration/#channels","title":"channels
","text":"This is a list that defines the channels used to fetch the packages from. If you want to use channels hosted on anaconda.org
you only need to use the name of the channel directly.
channels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n
Channels situated on the file system are also supported with absolute file paths:
channels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n
To access private or public channels on prefix.dev or Quetz use the url including the hostname:
channels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n
"},{"location":"reference/project_configuration/#platforms","title":"platforms
","text":"Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lock file (pixi.lock
).
platforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n
The available platforms are listed here: link
Special macOS behavior
macOS has two platforms: osx-64
for Intel Macs and osx-arm64
for Apple Silicon Macs. To support both, include both in your platforms list. Fallback: If osx-arm64
can't resolve, use osx-64
. Running osx-64
on Apple Silicon uses Rosetta for Intel binaries.
version
(optional)","text":"The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation for an explanation of what is allowed in a Version Spec.
version = \"1.2.3\"\n
"},{"location":"reference/project_configuration/#authors-optional","title":"authors
(optional)","text":"This is a list of authors of the project.
authors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n
"},{"location":"reference/project_configuration/#description-optional","title":"description
(optional)","text":"This should contain a short description of the project.
description = \"A simple description\"\n
"},{"location":"reference/project_configuration/#license-optional","title":"license
(optional)","text":"The license as a valid SPDX string (e.g. MIT AND Apache-2.0)
license = \"MIT\"\n
"},{"location":"reference/project_configuration/#license-file-optional","title":"license-file
(optional)","text":"Relative path to the license file.
license-file = \"LICENSE.md\"\n
"},{"location":"reference/project_configuration/#readme-optional","title":"readme
(optional)","text":"Relative path to the README file.
readme = \"README.md\"\n
"},{"location":"reference/project_configuration/#homepage-optional","title":"homepage
(optional)","text":"URL of the project homepage.
homepage = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#repository-optional","title":"repository
(optional)","text":"URL of the project source repository.
repository = \"https://github.com/prefix-dev/pixi\"\n
"},{"location":"reference/project_configuration/#documentation-optional","title":"documentation
(optional)","text":"URL of the project documentation.
documentation = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#conda-pypi-map-optional","title":"conda-pypi-map
(optional)","text":"A mapping of channel name or URL to the location of a mapping file, which can be a URL or a path. The mapping file should be structured as json with entries of the form conda_name: pypi_package_name. Example:
{\n \"jupyter-ros\": \"my-name-from-mapping\",\n \"boltons\": \"boltons-pypi\"\n}\n
If conda-forge
is not present in conda-pypi-map, pixi will use the prefix.dev mapping for it.
conda-pypi-map = { \"conda-forge\" = \"https://example.com/mapping\", \"https://repo.prefix.dev/robostack\" = \"local/robostack_mapping.json\"}\n
"},{"location":"reference/project_configuration/#channel-priority-optional","title":"channel-priority
(optional)","text":"This is the setting for the priority of the channels in the solver step.
Options:
strict
: Default. The channels are used in the order they are defined in the channels
list. Only packages from the first channel that has the package are used. This ensures that different variants for a single package are not mixed from different channels. Using packages from different incompatible channels like conda-forge
and main
can lead to hard-to-debug ABI incompatibilities. We strongly recommend not switching the default. - disabled
: There is no priority; all package variants from all channels are considered per package name and solved as one. Care should be taken when using this option. Since package variants can come from any channel in this mode, packages might not be compatible. This can cause hard-to-debug ABI incompatibilities.
We strongly discourage using this option.
channel-priority = \"disabled\"\n
channel-priority = \"disabled\"
is a security risk
Disabling channel priority may lead to unpredictable dependency resolutions. This is a possible security risk as it may lead to packages being installed from unexpected channels. It's advisable to maintain the default strict setting and order channels thoughtfully. If necessary, specify a channel directly for a dependency.
[project]\n# Putting conda-forge first solves most issues\nchannels = [\"conda-forge\", \"channel-name\"]\n[dependencies]\npackage = {version = \"*\", channel = \"channel-name\"}\n
"},{"location":"reference/project_configuration/#the-tasks-table","title":"The tasks
table","text":"Tasks are a way to automate certain custom commands in your project. For example, a lint
or format
step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run
and are executed using the deno_task_shell
.
[tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends-on=\"simple\"}\nalias = { depends-on=[\"depending\"]}\ndownload = { cmd=\"curl -o file.txt https://example.com/file.txt\" , outputs=[\"file.txt\"]}\nbuild = { cmd=\"npm build\", cwd=\"frontend\", inputs=[\"frontend/package.json\", \"frontend/*.js\"]}\nrun = { cmd=\"python run.py $ARGUMENT\", env={ ARGUMENT=\"value\" }}\nformat = { cmd=\"black $INIT_CWD\" } # runs black where you run pixi run format\nclean-env = { cmd = \"python isolated.py\", clean-env = true} # Only on Unix!\n
You can modify this table using pixi task
.
Note
Specify different tasks for different platforms using the target table
Info
If you want to hide a task from showing up with pixi task list
or pixi info
, you can prefix the name with _
. For example, if you want to hide depending
, you can rename it to _depending
.
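For example, using the task names from this section, a hidden task is declared exactly like a visible one:

```toml
[tasks]
simple = "echo This is a simple task"
# the leading underscore hides this task from `pixi task list` and `pixi info`
_depending = { cmd = "echo run after simple", depends-on = "simple" }
```

The hidden task can still be executed explicitly with pixi run _depending.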
system-requirements
table","text":"The system requirements are used to define minimal system specifications used during dependency resolution.
For example, we can define a unix system with a specific minimal libc version.
[system-requirements]\nlibc = \"2.28\"\n
or make the project depend on a specific version of cuda
: [system-requirements]\ncuda = \"12\"\n
The options are:
linux
: The minimal version of the linux kernel.libc
: The minimal version of the libc library. Also allows specifying the family of the libc library. e.g. libc = { family=\"glibc\", version=\"2.28\" }
macos
: The minimal version of the macOS operating system.cuda
: The minimal version of the CUDA library.More information in the system requirements documentation.
"},{"location":"reference/project_configuration/#the-pypi-options-table","title":"Thepypi-options
table","text":"The pypi-options
table is used to define options that are specific to PyPI registries. These options can be specified either at the root level, which adds them to the default feature, or at the feature level, which creates a union of these options when the features are included in the environment.
The options that can be defined are:
index-url
: replaces the main index url.extra-index-urls
: adds an extra index url.find-links
: similar to --find-links
option in pip
.no-build-isolation
: disables build isolation, can only be set per package.index-strategy
: allows for specifying the index strategy to use.These options are explained in the sections below. Most of these options are taken directly, or with slight modifications, from the uv settings. If any options you need are missing, feel free to create an issue requesting them.
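As a sketch of the root-level versus feature-level placement described above (the dev feature name is illustrative):

```toml
# root level: these become part of the default feature's options
[pypi-options]
extra-index-urls = ["https://example.com/simple"]

# feature level: unioned with the options above when the
# feature is included in an environment
[feature.dev.pypi-options]
find-links = [{ path = "./links" }]
```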
"},{"location":"reference/project_configuration/#alternative-registries","title":"Alternative registries","text":"Strict Index Priority
Unlike pip, pixi uses uv, which enforces a strict index priority: the first index on which a package can be found is used. The order is determined by the order in the toml file, with the extra-index-urls preferred over the index-url. Read more about this in the uv docs
Often you might want to use an alternative or extra index for your project. This can be done by adding the pypi-options
table to your pixi.toml
file; the following options are available:
index-url
: replaces the main index url. If this is not set, the default index used is https://pypi.org/simple
. Only one index-url
can be defined per environment.extra-index-urls
: adds an extra index url. The urls are used in the order they are defined, and are preferred over the index-url
. These are merged across features into an environment.find-links
: which can either be a path {path = './links'}
or a url {url = 'https://example.com/links'}
. This is similar to the --find-links
option in pip
. These are merged across features into an environment.An example:
[pypi-options]\nindex-url = \"https://pypi.org/simple\"\nextra-index-urls = [\"https://example.com/simple\"]\nfind-links = [{path = './links'}]\n
There are some examples in the pixi repository, that make use of this feature.
Authentication Methods
To read about existing authentication methods for private registries, please check the PyPI Authentication section.
"},{"location":"reference/project_configuration/#no-build-isolation","title":"No Build Isolation","text":"Even though build isolation is a good default, one can choose not to isolate the build for a certain package name; this allows the build to access the pixi
environment. This is convenient if you want to use torch
or something similar in your build process.
[dependencies]\npytorch = \"2.4.0\"\n\n[pypi-options]\nno-build-isolation = [\"detectron2\"]\n\n[pypi-dependencies]\ndetectron2 = { git = \"https://github.com/facebookresearch/detectron2.git\", rev = \"5b72c27ae39f99db75d43f18fd1312e1ea934e60\"}\n
Conda dependencies define the build environment
To use no-build-isolation
effectively, use conda dependencies to define the build environment. These are installed before the PyPI dependencies are resolved, so they are available during the build process. In the example above, adding torch
as a PyPI dependency would be ineffective, as it would not yet be installed during the PyPI resolution phase.
The strategy to use when resolving against multiple index URLs. Description modified from the uv documentation:
By default, uv
and thus pixi
, will stop at the first index on which a given package is available, and limit resolutions to those present on that first index (first-match). This prevents dependency confusion attacks, whereby an attack can upload a malicious package under the same name to a secondary index.
One index strategy per environment
Only one index-strategy
can be defined per environment or solve-group, otherwise, an error will be shown.
a
is available on index x
and y
, it will prefer the version from x
unless you've requested a package version that is only available on y
.x
and y
that both contain package a
, it will take the best version from either x
or y
, but should that version be available on both indexes it will prefer x
.PyPI only
The index-strategy
only changes PyPI package resolution and not conda package resolution.
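A minimal sketch of selecting a non-default strategy (the value unsafe-best-match is the uv-style name for the best-version behavior described above; verify the accepted values against your pixi version):

```toml
[pypi-options]
index-strategy = "unsafe-best-match"
```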
dependencies
table(s)","text":"This section defines what dependencies you would like to use for your project.
There are multiple dependencies tables. The default is [dependencies]
, which are dependencies that are shared across platforms.
Dependencies are defined using a VersionSpec. A VersionSpec
combines a Version with an optional operator.
Some examples are:
# Use this exact package version\npackage0 = \"1.2.3\"\n# Use 1.2.3 up to, but not including, 1.3.0\npackage1 = \"~=1.2.3\"\n# Use greater than 1.2 and lower than or equal to 1.4\npackage2 = \">1.2,<=1.4\"\n# Greater than or equal to 1.2.3, or lower than 1.0.0 (excluding 1.0.0 itself)\npackage3 = \">=1.2.3|<1.0.0\"\n
Dependencies can also be defined as a mapping where it is using a matchspec:
package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n
Tip
The dependencies can be easily added using the pixi add
command line. Running add
for an existing dependency will replace it with the newest it can use.
Note
To specify different dependencies for different platforms use the target table
"},{"location":"reference/project_configuration/#dependencies","title":"dependencies
","text":"Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table should you use anything different than conda-forge
. Even if the dependency defines a channel that channel should be added to the project.channels
list.
[dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
"},{"location":"reference/project_configuration/#pypi-dependencies","title":"pypi-dependencies
","text":"Details regarding the PyPI integration We use uv
, which is a new fast pip replacement written in Rust.
We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This disallows uv from installing these dependencies itself, and ensures it uses the exact version of these packages in the resolution. This is unique amongst conda based package managers, which usually just call pip from a subprocess.
The uv resolution is included in the lock file directly.
Pixi directly supports depending on PyPI packages, the PyPA calls a distributed package a 'distribution'. There are Source and Binary distributions both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.
Important considerations
dependencies
table where possible.These dependencies don't follow the conda matchspec specification. The version
is a string specification of the version according to PEP404/PyPA. Additionally, a list of extra's can be included, which are essentially optional dependencies. Note that this version
is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:
[dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\nfastapi = \"*\" # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\npandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\n\n# git dependencies\n# With ssh\nflask = { git = \"ssh://git@github.com/pallets/flask\" }\n# With https and a specific revision\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\n# TODO: will support later -> branch = '' or tag = '' to specify a branch or tag\n\n# You can also directly add a source dependency from a path; tip: keep this relative to the root of the project.\nminimal-project = { path = \"./minimal-project\", editable = true}\n\n# You can also use a direct url, to either a `.tar.gz` or `.zip`, or a `.whl` file\nclick = { url = \"https://github.com/pallets/click/releases/download/8.1.7/click-8.1.7-py3-none-any.whl\" }\n\n# You can also just use the default git repo; it will check out the default branch\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\n
"},{"location":"reference/project_configuration/#full-specification","title":"Full specification","text":"The full specification of a PyPI dependencies that pixi supports can be split into the following fields:
"},{"location":"reference/project_configuration/#extras","title":"extras
","text":"A list of extras to install with the package. e.g. [\"dataframe\", \"sql\"]
The extras field works with all other version specifiers as it is an addition to the version specifier.
pandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\npytest = { git = \"URL\", extras = [\"dev\"]}\nblack = { url = \"URL\", extras = [\"cli\"]}\nminimal-project = { path = \"./minimal-project\", editable = true, extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#version","title":"version
","text":"The version of the package to install. e.g. \">=1.0.0\"
or *
which stands for any version, this is pixi specific. Version is our default field so using no inline table ({}
) will default to this field.
py-rattler = \"*\"\nruff = \"~=1.0.0\"\npytest = {version = \"*\", extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#git","title":"git
","text":"A git repository to install from. This support both https:// and ssh:// urls.
Use git
in combination with rev
or subdirectory
:
rev
: A specific revision to install. e.g. rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660
subdirectory
: A subdirectory to install from. subdirectory = \"src\"
or subdirectory = \"src/packagex\"
# Note don't forget the `ssh://` or `https://` prefix!\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\npy-rattler = { git = \"ssh://git@github.com/mamba-org/rattler.git\", subdirectory = \"py-rattler\" }\n
"},{"location":"reference/project_configuration/#path","title":"path
","text":"A local path to install from. e.g. path = \"./path/to/package\"
We would advise to keep your path projects in the project, and to use a relative path.
Set editable
to true
to install in editable mode, this is highly recommended as it is hard to reinstall if you're not using editable mode. e.g. editable = true
minimal-project = { path = \"./minimal-project\", editable = true}\n
"},{"location":"reference/project_configuration/#url","title":"url
","text":"A URL to install a wheel or sdist directly from an url.
pandas = {url = \"https://files.pythonhosted.org/packages/3d/59/2afa81b9fb300c90531803c0fd43ff4548074fa3e8d0f747ef63b3b5e77a/pandas-2.2.1.tar.gz\"}\n
Did you know you can use: add --pypi
? Use the --pypi
flag with the add
command to quickly add PyPI packages from the CLI. E.g pixi add --pypi flask
This does not support all the features of the pypi-dependencies
table yet.
sdist
)","text":"The Source Distribution Format is a source based format (sdist for short), that a package can include alongside the binary wheel format. Because these distributions need to be built, the need a python executable to do this. This is why python needs to be present in a conda environment. Sdists usually depend on system packages to be built, especially when compiling C/C++ based python bindings. Think for example of Python SDL2 bindings depending on the C library: SDL2. To help built these dependencies we activate the conda environment that includes these pypi dependencies before resolving. This way when a source distribution depends on gcc
for example, it's used from the conda environment instead of the system.
host-dependencies
","text":"This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.
[host-dependencies]\npython = \"~=3.10.3\"\n
Typical examples of host dependencies are:
python
here and an R package would list mro-base
or r-base
.openssl
, rapidjson
, or xtensor
.build-dependencies
","text":"This table contains dependencies that are needed to build the project. Different from dependencies
and host-dependencies
these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.
[build-dependencies]\ncmake = \"~=3.24\"\n
Typical examples of build dependencies are:
cmake
is invoked on the build machine to generate additional code- or project-files which are then included in the compilation process.
The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.
For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64
and your host platform is linux-64
.
activation
table","text":"The activation table is used for specialized activation operations that need to be run when the environment is activated.
There are two types of activation operations a user can modify in the manifest:
scripts
: A list of scripts that are run when the environment is activated.env
: A mapping of environment variables that are set when the environment is activated.These activation operations will be run before the pixi run
and pixi shell
commands.
Note
The activation operations are run by the system shell interpreter as they run before an environment is available. This means that it runs as cmd.exe
on windows and bash
on linux and osx (Unix). Only .sh
, .bash
and .bat
files are supported.
And the environment variables are set in the shell that is running the activation script, thus take note when using e.g. $
or %
.
If you have scripts or env variables per platform, use the target table.
[activation]\nscripts = [\"env_setup.sh\"]\nenv = { ENV_VAR = \"value\" }\n\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n\n[target.linux-64.activation.env]\nENV_VAR = \"linux-value\"\n\n# You can also reference existing environment variables, but this has\n# to be done separately for unix-like operating systems and Windows\n[target.unix.activation.env]\nENV_VAR = \"$OTHER_ENV_VAR/unix-value\"\n\n[target.win.activation.env]\nENV_VAR = \"%OTHER_ENV_VAR%\\\\windows-value\"\n
"},{"location":"reference/project_configuration/#the-target-table","title":"The target
table","text":"The target table is a table that allows for platform specific configuration. Allowing you to make different sets of tasks or dependencies per platform.
The target table is currently implemented for the following sub-tables:
activation
dependencies
tasks
The target table is defined using [target.PLATFORM.SUB-TABLE]
. E.g [target.linux-64.dependencies]
The platform can be any of:
win
, osx
, linux
or unix
(unix
matches linux
and osx
)linux-64
, osx-arm64
The sub-table can be any of the specified above.
To make it a bit more clear, let's look at an example below. Currently, pixi combines the top level tables like dependencies
with the target-specific ones into a single set. Which, in the case of dependencies, can both add or overwrite dependencies. In the example below, we have cmake
being used for all targets but on osx-64
or osx-arm64
a different version of python will be selected.
[dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n
Here are some more examples:
[target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n
"},{"location":"reference/project_configuration/#the-feature-and-environments-tables","title":"The feature
and environments
tables","text":"The feature
table allows you to define features that can be used to create different [environments]
. The [environments]
table allows you to define different environments. The design is explained in this design document.
[feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n
This will create an environment called test
that has pytest
installed.
feature
table","text":"The feature
table allows you to define the following fields per feature.
dependencies
: Same as the dependencies.pypi-dependencies
: Same as the pypi-dependencies.pypi-options
: Same as the pypi-options.system-requirements
: Same as the system-requirements.activation
: Same as the activation.platforms
: Same as the platforms. Unless overridden, the platforms
of the feature will be those defined at project level.channels
: Same as the channels. Unless overridden, the channels
of the feature will be those defined at project level.channel-priority
: Same as the channel-priority.target
: Same as the target.tasks
: Same as the tasks.These tables are all also available without the feature
prefix. When those are used we call them the default
feature. This is a protected name you cannot use for your own feature.
[feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\n# Results in: [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\nchannels = [\"nvidia\"]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Cuda feature table example but written as separate tables[feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
"},{"location":"reference/project_configuration/#the-environments-table","title":"The environments
table","text":"The [environments]
table allows you to define environments that are created using the features defined in the [feature]
tables.
The environments table is defined using the following fields:
features
: The features that are included in the environment. Unless no-default-feature
is set to true
, the default feature is implicitly included in the environment.solve-group
: The solve group is used to group environments together at the solve stage. This is useful for environments that need to have the same dependencies but might extend them with additional dependencies. For instance when testing a production environment with additional test dependencies. These dependencies will then be the same version in all environments that have the same solve group. But the different environments contain different subsets of the solve-groups dependencies set.no-default-feature
: Whether to include the default feature in that environment. The default is false
, meaning the default feature is included.
[environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = {features = [\"lint\"], no-default-feature = true}\n
As shown in the example above, in the simplest of cases, it is possible to define an environment only by listing its features: Simplest example[environments]\ntest = [\"test\"]\n
is equivalent to
Simplest example expanded[environments]\ntest = {features = [\"test\"]}\n
When an environment comprises several features (including the default feature): - The activation
and tasks
of the environment are the union of the activation
and tasks
of all its features. - The dependencies
and pypi-dependencies
of the environment are the union of the dependencies
and pypi-dependencies
of all its features. This means that if several features define a requirement for the same package, both requirements will be combined. Beware of conflicting requirements across features added to the same environment. - The system-requirements
of the environment is the union of the system-requirements
of all its features. If multiple features specify a requirement for the same system package, the highest version is chosen. - The channels
of the environment is the union of the channels
of all its features. Channel priorities can be specified in each feature, to ensure channels are considered in the right order in the environment. - The platforms
of the environment is the intersection of the platforms
of all its features. Be aware that the platforms supported by a feature (including the default feature) will be considered as the platforms
defined at project level (unless overridden in the feature). This means that it is usually a good idea to set the project platforms
to all platforms it can support across its environments.
The global configuration options are documented in the global configuration section.
"},{"location":"switching_from/conda/","title":"Transitioning from theconda
or mamba
to pixi
","text":"Welcome to the guide designed to ease your transition from conda
or mamba
to pixi
. This document compares key commands and concepts between these tools, highlighting pixi
's unique approach to managing environments and packages. With pixi
, you'll experience a project-based workflow, enhancing your development process, and allowing for easy sharing of your work.
Pixi
builds upon the foundation of the conda ecosystem, introducing a project-centric approach rather than focusing solely on environments. This shift towards projects offers a more organized and efficient way to manage dependencies and run code, tailored to modern development practices.
conda create -n myenv -c conda-forge python=3.8
pixi init myenv
followed by pixi add python=3.8
Activating an Environment conda activate myenv
pixi shell
within the project directory Deactivating an Environment conda deactivate
exit
from the pixi shell
Running a Task conda run -n myenv python my_program.py
pixi run python my_program.py
(See run) Installing a Package conda install numpy
pixi add numpy
Uninstalling a Package conda remove numpy
pixi remove numpy
No base
environment
Conda has a base environment, which is the default environment when you start a new shell. Pixi does not have a base environment. And requires you to install the tools you need in the project or globally. Using pixi global install bat
will install bat
in a global environment, which is not the same as the base
environment in conda.
For some advanced use-cases, you can activate the environment in the current shell. This uses the pixi shell-hook
which prints the activation script, which can be used to activate the environment in the current shell without pixi
itself.
~/myenv > eval \"$(pixi shell-hook)\"\n
"},{"location":"switching_from/conda/#environment-vs-project","title":"Environment vs Project","text":"Conda
and mamba
focus on managing environments, while pixi
emphasizes projects. In pixi
, a project is a folder containing a manifest(pixi.toml
/pyproject.toml
) file that describes the project, a pixi.lock
lock-file that describes the exact dependencies, and a .pixi
folder that contains the environment.
This project-centric approach allows for easy sharing and collaboration, as the project folder contains all the necessary information to recreate the environment. It manages more than one environment for more than one platform in a single project, and allows for easy switching between them. (See multiple environments)
"},{"location":"switching_from/conda/#global-environments","title":"Global environments","text":"conda
installs all environments in one global location. When this is important to you for filesystem reasons, you can use the detached-environments feature of pixi.
pixi config set detached-environments true\n# or a specific location\npixi config set detached-environments /path/to/envs\n
This doesn't allow you to activate the environments using pixi shell -n
but it will make the installation of the environments go to the same folder. pixi
does have the pixi global
command to install tools on your machine. (See global) This is not a replacement for conda
but works the same as pipx
and condax
. It creates a single isolated environment for the given requirement and installs the binaries into the global path.
pixi global install bat\nbat pixi.toml\n
Never install pip
with pixi global
Installations with pixi global
get their own isolated environment. Installing pip
with pixi global
will create a new isolated environment with its own pip
binary. Using that pip
binary will install packages in the pip
environment, making it unreachable from anywhere, as you can't activate it.
With pixi
you can import environment.yml
files into a pixi project. (See import)
pixi init --import environment.yml\n
This will create a new project with the dependencies from the environment.yml
file. Exporting your environment If you are working with Conda users or systems, you can export your environment to an environment.yml
file to share it.
pixi project export conda\n
Additionally you can export a conda explicit specification."},{"location":"switching_from/conda/#troubleshooting","title":"Troubleshooting","text":"Encountering issues? Here are solutions to some common problems when you're used to the conda
workflow:
is excluded because due to strict channel priority not using this option from: 'https://conda.anaconda.org/conda-forge/'
This error occurs when the package is in multiple channels. pixi
uses a strict channel priority. See channel priority for more information. After pixi global install pip
, pip doesn't work. pip
is installed in the global isolated environment. Use pixi add pip
in a project to install pip
in the project environment and use that project. After pixi global install <Any Library>
-> import <Any Library>
-> ModuleNotFoundError: No module named '<Any Library>'
The library is installed in the global isolated environment. Use pixi add <Any Library>
in a project to install the library in the project environment and use that project.poetry
to pixi
","text":"Welcome to the guide designed to ease your transition from poetry
to pixi
. This document compares key commands and concepts between these tools, highlighting pixi
's unique approach to managing environments and packages. With pixi
, you'll experience a project-based workflow similar to poetry
while including the conda
ecosystem and allowing for easy sharing of your work.
Poetry is most likely the closest tool to pixi in terms of project management in the Python ecosystem. On top of the PyPI ecosystem, pixi
adds the power of the conda ecosystem, allowing for more flexible and powerful environment management.
poetry new myenv
pixi init myenv
Running a Task poetry run which python
pixi run which python
pixi
uses a built-in cross-platform shell for run, whereas poetry uses your shell. Installing a Package poetry add numpy
pixi add numpy
adds the conda variant. pixi add --pypi numpy
adds the PyPI variant. Uninstalling a Package poetry remove numpy
pixi remove numpy
removes the conda variant. pixi remove --pypi numpy
removes the PyPI variant. Building a package poetry build
We've yet to implement package building and publishing Publishing a package poetry publish
We've yet to implement package building and publishing Reading the pyproject.toml [tool.poetry]
[tool.pixi]
Defining dependencies [tool.poetry.dependencies]
[tool.pixi.dependencies]
for conda, [tool.pixi.pypi-dependencies]
or [project.dependencies]
for PyPI dependencies Dependency definition - numpy = \"^1.2.3\"
- numpy = \"~1.2.3\"
- numpy = \"*\"
- numpy = \">=1.2.3 <2.0.0\"
- numpy = \">=1.2.3 <1.3.0\"
- numpy = \"*\"
Lock file poetry.lock
pixi.lock
Environment directory ~/.cache/pypoetry/virtualenvs/myenv
./.pixi
Defaults to the project folder, move this using the detached-environments
"},{"location":"switching_from/poetry/#support-both-poetry-and-pixi-in-my-project","title":"Support both poetry
and pixi
in my project","text":"You can allow users to use poetry
and pixi
in the same project, they will not touch each other's parts of the configuration or system. It's best to duplicate the dependencies, basically making an exact copy of the tool.poetry.dependencies
into tool.pixi.pypi-dependencies
. Make sure that python
is only defined in the tool.pixi.dependencies
and not in the tool.pixi.pypi-dependencies
.
Mixing pixi
and poetry
It's possible to use poetry
in pixi
environments but this is advised against. Pixi supports PyPI dependencies in a different way than poetry
does, and mixing them can lead to unexpected behavior. As you can only use one package manager at a time, it's best to stick to one.
If using poetry on top of a pixi project, you'll always need to install the poetry
environment after the pixi
environment, and let pixi
handle the python
and poetry
installation.
In this tutorial, we will show you how to create a simple Python project with pixi. We will show some of the features that pixi provides that are currently not part of pdm
, poetry
etc.
Pixi builds upon the conda ecosystem, which allows you to create a Python environment with all the dependencies you need. This is especially useful when you are working with multiple Python interpreters and bindings to C and C++ libraries. For example, GDAL from PyPI does not have binary C dependencies, but the conda package does. On the other hand, some packages are only available through PyPI, which pixi
can also install for you. Best of both world, let's give it a go!
pixi.toml
and pyproject.toml
","text":"We support two manifest formats: pyproject.toml
and pixi.toml
. In this tutorial, we will use the pyproject.toml
format because it is the most common format for Python projects.
Let's start out by making a directory and creating a new pyproject.toml
file.
pixi init pixi-py --format pyproject\n
This gives you the following pyproject.toml:
[project]\nname = \"pixi-py\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [{name = \"Tim de Jager\", email = \"tim@prefix.dev\"}]\nrequires-python = \">= 3.11\"\ndependencies = []\n\n[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\"]\n\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n\n[tool.pixi.tasks]\n
Let's add the Python project to the tree:
Linux & macOSWindowscd pixi-py # move into the project directory\nmkdir pixi_py\ntouch pixi_py/__init__.py\n
cd pixi-py\nmkdir pixi_py\ntype nul > pixi_py\\__init__.py\n
We now have the following directory structure:
.\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\n\u2514\u2500\u2500 pyproject.toml\n
We've used a flat-layout here but pixi supports both flat- and src-layouts.
"},{"location":"tutorials/python/#whats-in-the-pyprojecttoml","title":"What's in thepyproject.toml
?","text":"Okay, so let's have a look at what's sections have been added and how we can modify the pyproject.toml
.
These first entries were added to the pyproject.toml
file:
# Main pixi entry\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\n# This is your machine platform by default\nplatforms = [\"osx-arm64\"]\n
The channels
and platforms
are added to the [tool.pixi.project]
section. Channels like conda-forge
manage packages similar to PyPI but allow for different packages across languages. The keyword platforms
determines what platform the project supports.
The pixi_py
package itself is added as an editable dependency. This means that the package is installed in editable mode, so you can make changes to the package and see the changes reflected in the environment, without having to re-install the environment.
# Editable installs\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n
In pixi, unlike other package managers, this is explicitly stated in the pyproject.toml
file. The main reason being so that you can choose which environment this package should be included in.
Our projects usually depend on other packages.
$ pixi add black\nAdded black\n
This will result in the following addition to the pyproject.toml
:
# Dependencies\n[tool.pixi.dependencies]\nblack = \">=24.4.2,<24.5\"\n
But we can also be strict about the version that should be used with pixi add black=24
, resulting in
[tool.pixi.dependencies]\nblack = \"24.*\"\n
Now, let's add some optional dependencies:
pixi add --pypi --feature test pytest\n
Which results in the following fields added to the pyproject.toml
:
[project.optional-dependencies]\ntest = [\"pytest\"]\n
After we have added the optional dependencies to the pyproject.toml
, pixi automatically creates a feature
, which can contain a collection of dependencies
, tasks
, channels
, and more.
Sometimes there are packages that aren't available on conda channels but are published on PyPI. We can add these as well, which pixi will solve together with the default dependencies.
$ pixi add black --pypi\nAdded black\nAdded these as pypi-dependencies.\n
which results in the addition to the dependencies
key in the pyproject.toml
dependencies = [\"black\"]\n
When using the pypi-dependencies
you can make use of the optional-dependencies
that other packages make available. For example, black
makes the cli
dependencies option, which can be added with the --pypi
keyword:
$ pixi add black[cli] --pypi\nAdded black[cli]\nAdded these as pypi-dependencies.\n
which updates the dependencies
entry to
dependencies = [\"black[cli]\"]\n
Optional dependencies in pixi.toml
This tutorial focuses on the use of the pyproject.toml
, but in case you're curious, the pixi.toml
would contain the following entry after the installation of a PyPI package including an optional dependency:
[pypi-dependencies]\nblack = { version = \"*\", extras = [\"cli\"] }\n
"},{"location":"tutorials/python/#installation-pixi-install","title":"Installation: pixi install
","text":"Now let's install
the project with pixi install
:
$ pixi install\n\u2714 Project in /path/to/pixi-py is ready to use!\n
We now have a new directory called .pixi
in the project root. This directory contains the environment that was created when we ran pixi install
. The environment is a conda environment that contains the dependencies that we specified in the pyproject.toml
file. We can also install the test environment with pixi install -e test
. We can use these environments for executing code.
We also have a new file called pixi.lock
in the project root. This file contains the exact versions of the dependencies that were installed in the environment across platforms.
Using pixi list
, you can see what's in the environment, this is essentially a nicer view on the lock file:
$ pixi list\nPackage Version Build Size Kind Source\nbzip2 1.0.8 h93a5062_5 119.5 KiB conda bzip2-1.0.8-h93a5062_5.conda\nblack 24.4.2 3.8 MiB pypi black-24.4.2-cp312-cp312-win_amd64.http.whl\nca-certificates 2024.2.2 hf0a4a13_0 152.1 KiB conda ca-certificates-2024.2.2-hf0a4a13_0.conda\nlibexpat 2.6.2 hebf3989_0 62.2 KiB conda libexpat-2.6.2-hebf3989_0.conda\nlibffi 3.4.2 h3422bc3_5 38.1 KiB conda libffi-3.4.2-h3422bc3_5.tar.bz2\nlibsqlite 3.45.2 h091b4b1_0 806 KiB conda libsqlite-3.45.2-h091b4b1_0.conda\nlibzlib 1.2.13 h53f4e23_5 47 KiB conda libzlib-1.2.13-h53f4e23_5.conda\nncurses 6.4.20240210 h078ce10_0 801 KiB conda ncurses-6.4.20240210-h078ce10_0.conda\nopenssl 3.2.1 h0d3ecfb_1 2.7 MiB conda openssl-3.2.1-h0d3ecfb_1.conda\npython 3.12.3 h4a7b5fc_0_cpython 12.6 MiB conda python-3.12.3-h4a7b5fc_0_cpython.conda\nreadline 8.2 h92ec313_1 244.5 KiB conda readline-8.2-h92ec313_1.conda\ntk 8.6.13 h5083fa2_1 3 MiB conda tk-8.6.13-h5083fa2_1.conda\ntzdata 2024a h0c530f3_0 117 KiB conda tzdata-2024a-h0c530f3_0.conda\npixi-py 0.1.0 pypi . (editable)\nxz 5.2.6 h57fd34a_0 230.2 KiB conda xz-5.2.6-h57fd34a_0.tar.bz2\n
Python
The Python interpreter is also installed in the environment. This is because the Python interpreter version is read from the requires-python
field in the pyproject.toml
file. This is used to determine the Python version to install in the environment. This way, pixi automatically manages/bootstraps the Python interpreter for you, so no more brew
, apt
or other system install steps.
Here, you can see the different conda and Pypi packages listed. As you can see, the pixi-py
package that we are working on is installed in editable mode. Every environment in pixi is isolated but reuses files that are hard-linked from a central cache directory. This means that you can have multiple environments with the same packages but only have the individual files stored once on disk.
We can create the default
and test
environments based on our own test
feature from the optional-dependency
:
pixi project environment add default --solve-group default\npixi project environment add test --feature test --solve-group default\n
Which results in:
# Environments\n[tool.pixi.environments]\ndefault = { solve-group = \"default\" }\ntest = { features = [\"test\"], solve-group = \"default\" }\n
Solve Groups Solve groups are a way to group dependencies together. This is useful when you have multiple environments that share the same dependencies. For example, maybe pytest
is a dependency that influences the dependencies of the default
environment. By putting these in the same solve group, you ensure that the versions in test
and default
are exactly the same.
The default
environment is created when you run pixi install
. The test
environment is created from the optional dependencies in the pyproject.toml
file. You can execute commands in this environment with e.g. pixi run -e test python
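A solve group can be pictured as: union the requirements of every environment in the group, solve once, and hand every environment the same pins. A toy Python model of this idea (hypothetical, not pixi's actual solver):

```python
def solve_group(envs: dict, catalog: dict) -> dict:
    """Union all environments' requirements and pin each package once,
    so every environment in the group agrees on versions (toy model)."""
    merged = set()
    for requirements in envs.values():
        merged.update(requirements)
    # Pick one version per package from the available versions.
    pins = {name: max(catalog[name]) for name in merged}
    # Each environment gets only its own packages, but at the shared pins.
    return {env: {name: pins[name] for name in reqs} for env, reqs in envs.items()}
```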
Let's add some code to the pixi-py
package. We will add a new function to the pixi_py/__init__.py
file:
from rich import print\n\ndef hello():\n return \"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\"\n\ndef say_hello():\n print(*hello())\n
Now add the rich
dependency from PyPI using: pixi add --pypi rich
.
Let's see if this works by running:
pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n
Slow? This might be slow(2 minutes) the first time because pixi installs the project, but it will be near instant the second time.
Pixi runs the self installed Python interpreter. Then, we are importing the pixi_py
package, which is installed in editable mode. The code calls the say_hello
function that we just added. And it works! Cool!
Okay, so let's add a test for this function. Let's add a tests/test_me.py
file in the root of the project.
Giving us the following project structure:
.\n\u251c\u2500\u2500 pixi.lock\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\n\u251c\u2500\u2500 pyproject.toml\n\u2514\u2500\u2500 tests/test_me.py\n
from pixi_py import hello\n\ndef test_pixi_py():\n assert hello() == (\"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\")\n
Let's add an easy task for running the tests.
$ pixi task add --feature test test \"pytest\"\n\u2714 Added task `test`: pytest .\n
So pixi has a task system to make it easy to run commands. Similar to npm
scripts or something you would specify in a Justfile
.
Tasks are actually a pretty cool pixi feature that is powerful and runs in a cross-platform shell. You can do caching, dependencies and more. Read more about tasks in the tasks section.
$ pixi r test\n\u2728 Pixi task (test): pytest .\n================================================================================================= test session starts =================================================================================================\nplatform darwin -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0\nrootdir: /private/tmp/pixi-py\nconfigfile: pyproject.toml\ncollected 1 item\n\ntest_me.py . [100%]\n\n================================================================================================== 1 passed in 0.00s =================================================================================================\n
Neat! It seems to be working!
"},{"location":"tutorials/python/#test-vs-default-environment","title":"Test vs Default environment","text":"The interesting thing shows up when we compare the output of the two environments.
pixi list -e test\n# v.s. default environment\npixi list\n
Is that the test environment has:
package version build size kind source\n...\npytest 8.1.1 1.1 mib pypi pytest-8.1.1-py3-none-any.whl\n...\n
But the default environment is missing this package. This way, you can finetune your environments to only have the packages that are needed for that environment. E.g. you could also have a dev
environment that has pytest
and ruff
installed, but you could omit these from the prod
environment. There is a docker example that shows how to set up a minimal prod
environment and copy from there.
Last thing, pixi provides the ability for pypi
packages to depend on conda
packages. Let's confirm this with pixi list
:
$ pixi list\nPackage Version Build Size Kind Source\n...\npygments 2.17.2 4.1 MiB pypi pygments-2.17.2-py3-none-any.http.whl\n...\n
Let's explicitly add pygments
to the pyproject.toml
file. Which is a dependency of the rich
package.
pixi add pygments\n
This will add the following to the pyproject.toml
file:
[tool.pixi.dependencies]\npygments = \">=2.17.2,<2.18\"\n
We can now see that the pygments
package is now installed as a conda package.
$ pixi list\nPackage Version Build Size Kind Source\n...\npygments 2.17.2 pyhd8ed1ab_0 840.3 KiB conda pygments-2.17.2-pyhd8ed1ab_0.conda\n
This way, PyPI dependencies and conda dependencies can be mixed and matched to seamlessly interoperate.
$ pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n
And it still works!
"},{"location":"tutorials/python/#conclusion","title":"Conclusion","text":"In this tutorial, you've seen how easy it is to use a pyproject.toml
to manage your pixi dependencies and environments. We have also explored how to use PyPI and conda dependencies seamlessly together in the same project and install optional dependencies to manage Python packages.
Hopefully, this provides a flexible and powerful way to manage your Python projects and a fertile base for further exploration with Pixi.
Thanks for reading! Happy Coding \ud83d\ude80
Any questions? Feel free to reach out or share this tutorial on X, join our Discord, send us an e-mail or follow our GitHub.
"},{"location":"tutorials/ros2/","title":"Tutorial: Develop a ROS 2 package withpixi
","text":"In this tutorial, we will show you how to develop a ROS 2 package using pixi
. The tutorial is written to be executed from top to bottom; skipping steps might result in errors.
The audience for this tutorial is developers who are familiar with ROS 2 and who are interested in trying pixi for their development workflow.
"},{"location":"tutorials/ros2/#prerequisites","title":"Prerequisites","text":"pixi
installed. If you haven't installed it yet, you can follow the instructions in the installation guide. The crux of this tutorial is to show that you only need pixi!
"},{"location":"tutorials/ros2/#create-a-pixi-project","title":"Create a pixi project.","text":"pixi init my_ros2_project -c robostack-staging -c conda-forge\ncd my_ros2_project\n
It should have created a directory structure like this:
my_ros2_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n
The pixi.toml
file is the manifest file for your project. It should look like this:
[project]\nname = \"my_ros2_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"robostack-staging\", \"conda-forge\"]\n# Your project can support multiple platforms, the current platform will be automatically added.\nplatforms = [\"linux-64\"]\n\n[tasks]\n\n[dependencies]\n
The channels
you added to the init
command are repositories of packages, you can search in these repositories through our prefix.dev website. The platforms
are the systems you want to support, in pixi you can support multiple platforms, but you have to define which platforms, so pixi can test if those are supported for your dependencies. For the rest of the fields, you can fill them in as you see fit.
To use a pixi project you don't need any dependencies on your system, all the dependencies you need should be added through pixi, so other users can use your project without any issues.
Let's start with the turtlesim
example
pixi add ros-humble-desktop ros-humble-turtlesim\n
This will add the ros-humble-desktop
and ros-humble-turtlesim
packages to your manifest. Depending on your internet speed this might take a minute, as it will also install ROS in your project folder (.pixi
).
Now run the turtlesim
example.
pixi run ros2 run turtlesim turtlesim_node\n
Or use the shell
command to start an activated environment in your terminal.
pixi shell\nros2 run turtlesim turtlesim_node\n
Congratulations you have ROS 2 running on your machine with pixi!
Some more fun with the turtleTo control the turtle you can run the following command in a new terminal
cd my_ros2_project\npixi run ros2 run turtlesim turtle_teleop_key\n
Now you can control the turtle with the arrow keys on your keyboard.
"},{"location":"tutorials/ros2/#add-a-custom-python-node","title":"Add a custom Python node","text":"As ros works with custom nodes, let's add a custom node to our project.
pixi run ros2 pkg create --build-type ament_python --destination-directory src --node-name my_node my_package\n
To build the package we need some more dependencies:
pixi add colcon-common-extensions \"setuptools<=58.2.0\"\n
Add the created initialization script for the ros workspace to your manifest file.
Then run the build command
pixi run colcon build\n
This will create a sourceable script in the install
folder, you can source this script through an activation script to use your custom node. Normally this would be the script you add to your .bashrc
but now you tell pixi to use it.
[activation]\nscripts = [\"install/setup.sh\"]\n
pixi.toml[activation]\nscripts = [\"install/setup.bat\"]\n
Multi platform support You can add multiple activation scripts for different platforms, so you can support multiple platforms with one project. Use the following example to add support for both Linux and Windows, using the target syntax.
[project]\nplatforms = [\"linux-64\", \"win-64\"]\n\n[activation]\nscripts = [\"install/setup.sh\"]\n[target.win-64.activation]\nscripts = [\"install/setup.bat\"]\n
Now you can run your custom node with the following command
pixi run ros2 run my_package my_node\n
"},{"location":"tutorials/ros2/#simplify-the-user-experience","title":"Simplify the user experience","text":"In pixi
we have a feature called tasks
, this allows you to define a task in your manifest file and run it with a simple command. Let's add a task to run the turtlesim
example and the custom node.
pixi task add sim \"ros2 run turtlesim turtlesim_node\"\npixi task add build \"colcon build --symlink-install\"\npixi task add hello \"ros2 run my_package my_node\"\n
Now you can run these task by simply running
pixi run sim\npixi run build\npixi run hello\n
Advanced task usage Tasks are a powerful feature in pixi.
depends-on
to the tasks to create a task chain.cwd
to the tasks to run the task in a different directory from the root of the project.inputs
and outputs
to the tasks to create a task that only runs when the inputs are changed.target
syntax to run specific tasks on specific machines.[tasks]\nsim = \"ros2 run turtlesim turtlesim_node\"\nbuild = {cmd = \"colcon build --symlink-install\", inputs = [\"src\"]}\nhello = { cmd = \"ros2 run my_package my_node\", depends-on = [\"build\"] }\n
"},{"location":"tutorials/ros2/#build-a-c-node","title":"Build a C++ node","text":"To build a C++ node you need to add the ament_cmake
and some other build dependencies to your manifest file.
pixi add ros-humble-ament-cmake-auto compilers pkg-config cmake ninja\n
Now you can create a C++ node with the following command
pixi run ros2 pkg create --build-type ament_cmake --destination-directory src --node-name my_cpp_node my_cpp_package\n
Now you can build it again and run it with the following commands
# Passing arguments to the build command to build with Ninja, add them to the manifest if you want to default to ninja.\npixi run build --cmake-args -G Ninja\npixi run ros2 run my_cpp_package my_cpp_node\n
Tip Add the cpp task to the manifest file to simplify the user experience.
pixi task add hello-cpp \"ros2 run my_cpp_package my_cpp_node\"\n
"},{"location":"tutorials/ros2/#conclusion","title":"Conclusion","text":"In this tutorial, we showed you how to create a Python & CMake ROS2 project using pixi
. We also showed you how to add dependencies to your project using pixi
, and how to run your project using pixi run
. This way you can make sure that your project is reproducible on all your machines that have pixi
installed.
Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!
"},{"location":"tutorials/ros2/#frequently-asked-questions","title":"Frequently asked questions","text":""},{"location":"tutorials/ros2/#what-happens-with-rosdep","title":"What happens withrosdep
?","text":"Currently, we don't support rosdep
in a pixi environment, so you'll have to add the packages using pixi add
. rosdep
will call conda install
which isn't supported in a pixi environment.
pixi
","text":"In this tutorial, we will show you how to develop a Rust package using pixi
. The tutorial is written to be executed from top to bottom, missing steps might result in errors.
The audience for this tutorial is developers who are familiar with Rust and cargo
and how are interested to try pixi for their development workflow. The benefit would be within a rust workflow that you lock both rust and the C/System dependencies your project might be using. E.g tokio users will almost most definitely use openssl
.
If you're new to pixi, you can check out the basic usage guide. This will teach you the basics of pixi project within 3 minutes.
"},{"location":"tutorials/rust/#prerequisites","title":"Prerequisites","text":"pixi
installed. If you haven't installed it yet, you can follow the instructions in the installation guide. The crux of this tutorial is to show that you only need pixi!pixi init my_rust_project\ncd my_rust_project\n
It should have created a directory structure like this:
my_rust_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n
The pixi.toml
file is the manifest file for your project. It should look like this:
[project]\nname = \"my_rust_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # (1)!\n\n[tasks]\n\n[dependencies]\n
platforms
is set to your system's platform by default. You can change it to any platform you want to support. e.g. [\"linux-64\", \"osx-64\", \"osx-arm64\", \"win-64\"]
.To use a pixi project you don't need any dependencies on your system, all the dependencies you need should be added through pixi, so other users can use your project without any issues.
pixi add rust\n
This will add the rust
package to your pixi.toml
file under [dependencies]
. Which includes the rust
toolchain, and cargo
.
cargo
project","text":"Now that you have rust installed, you can create a cargo
project in your pixi
project.
pixi run cargo init\n
pixi run
is pixi's way to run commands in the pixi
environment, it will make sure that the environment is set up correctly for the command to run. It runs its own cross-platform shell, if you want more information checkout the tasks
documentation. You can also activate the environment in your own shell by running pixi shell
, after that you don't need pixi run ...
anymore.
Now we can build a cargo
project using pixi
.
pixi run cargo build\n
To simplify the build process, you can add a build
task to your pixi.toml
file using the following command: pixi task add build \"cargo build\"\n
Which creates this field in the pixi.toml
file: pixi.toml[tasks]\nbuild = \"cargo build\"\n
And now you can build your project using:
pixi run build\n
You can also run your project using:
pixi run cargo run\n
Which you can simplify with a task again. pixi task add start \"cargo run\"\n
So you should get the following output:
pixi run start\nHello, world!\n
Congratulations, you have a Rust project running on your machine with pixi!
"},{"location":"tutorials/rust/#next-steps-why-is-this-useful-when-there-is-rustup","title":"Next steps, why is this useful when there isrustup
?","text":"Cargo is not a binary package manager, but a source-based package manager. This means that you need to have the Rust compiler installed on your system to use it. And possibly other dependencies that are not included in the cargo
package manager. For example, you might need to install openssl
or libssl-dev
on your system to build a package. This is the case for pixi
as well, but pixi
will install these dependencies in your project folder, so you don't have to worry about them.
Add the following dependencies to your cargo project:
pixi run cargo add git2\n
If your system is not preconfigured to build C and have the libssl-dev
package installed you will not be able to build the project:
pixi run build\n...\nCould not find directory of OpenSSL installation, and this `-sys` crate cannot\nproceed without this knowledge. If OpenSSL is installed and this crate had\ntrouble finding it, you can set the `OPENSSL_DIR` environment variable for the\ncompilation process.\n\nMake sure you also have the development packages of openssl installed.\nFor example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.\n\nIf you're in a situation where you think the directory *should* be found\nautomatically, please open a bug at https://github.com/sfackler/rust-openssl\nand include information about your system as well as this message.\n\n$HOST = x86_64-unknown-linux-gnu\n$TARGET = x86_64-unknown-linux-gnu\nopenssl-sys = 0.9.102\n\n\nIt looks like you're compiling on Linux and also targeting Linux. Currently this\nrequires the `pkg-config` utility to find OpenSSL but unfortunately `pkg-config`\ncould not be found. If you have OpenSSL installed you can likely fix this by\ninstalling `pkg-config`.\n...\n
You can fix this, by adding the necessary dependencies for building git2, with pixi: pixi add openssl pkg-config compilers\n
Now you should be able to build your project again:
pixi run build\n...\n Compiling git2 v0.18.3\n Compiling my_rust_project v0.1.0 (/my_rust_project)\n Finished dev [unoptimized + debuginfo] target(s) in 7.44s\n Running `target/debug/my_rust_project`\n
"},{"location":"tutorials/rust/#extra-add-more-tasks","title":"Extra: Add more tasks","text":"You can add more tasks to your pixi.toml
file to simplify your workflow.
For example, you can add a test
task to run your tests:
pixi task add test \"cargo test\"\n
And you can add a clean
task to clean your project:
pixi task add clean \"cargo clean\"\n
You can add a formatting task to your project:
pixi task add fmt \"cargo fmt\"\n
You can extend these tasks to run multiple commands using the depends-on
field.
pixi task add lint \"cargo clippy\" --depends-on fmt\n
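These commands write entries into the [tasks] table of your pixi.toml. A sketch of what the resulting table looks like (exact formatting may differ between pixi versions):\n[tasks]\ntest = \"cargo test\"\nclean = \"cargo clean\"\nfmt = \"cargo fmt\"\nlint = { cmd = \"cargo clippy\", depends-on = [\"fmt\"] }\n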
"},{"location":"tutorials/rust/#conclusion","title":"Conclusion","text":"In this tutorial, we showed you how to create a Rust project using pixi
. We also showed you how to add dependencies to your project using pixi
. This way you can make sure that your project is reproducible on any system that has pixi
installed.
Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!
"},{"location":"CHANGELOG/","title":"Changelog","text":"All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
"},{"location":"CHANGELOG/#0300-2024-09-19","title":"[0.30.0] - 2024-09-19","text":""},{"location":"CHANGELOG/#highlights","title":"\u2728 Highlights","text":"I want to thank @synapticarbors and @abkfenris for starting the work on pixi project export
. Pixi now supports the export of a conda environment.yml
file and a conda explicit specification file. This is a great addition to the project and will help users to share their projects with other non pixi users.
pixi search
by @Hofer-Julian in #2018environment.yml
by @abkfenris in #2003system-requirements
information by @ruben-arts in #2079conda-pypi-map
for feature channels by @nichmor in #2038subdirectory
in pypi url by @ruben-arts in #2065pyproject.toml
by @ruben-arts in #2075strip_channel_alias
from rattler by @Hofer-Julian in #2017psutils
by @Hofer-Julian in #2083pixi init
for pyproject.toml
by @Hofer-Julian in #1947init
add dependencies independent of target and don't install by @ruben-arts in #1916keyrings.artifacts
to the list of project built with pixi
by @jslorrma in #1908INIT_CWD
to activated env pixi run
by @ruben-arts in #1798__linux
default to 4.18
by @ruben-arts in #1887pixi global
by @Hofer-Julian in #1800pixi global
proposal by @Hofer-Julian in #1861channels
required in pixi global
manifest by @Hofer-Julian in #1868rlimit
by @baszalmstra in #1766NamedChannelOrUrl
by @ruben-arts in #1820pixi search
by @baszalmstra in #1849pixi tree -i
duplicate output by @baszalmstra in #1847find-links
with manifest-path by @baszalmstra in #1864.pixi
folder by @baszalmstra in #1866pub(crate) fn
in order to detect and remove unused functions by @Hofer-Julian in #1805TaskNode::full_command
for tests by @Hofer-Julian in #1809Default
for more structs by @Hofer-Julian in #1824get_up_to_date_prefix
to update_prefix
by @Hofer-Julian in #1837HasSpecs
implementation more functional by @Hofer-Julian in #1863init
by @ruben-arts in #1775pixi_spec
crate by @baszalmstra in #1741This release contains a lot of refactoring and improvements to the codebase, in preparation for future features and improvements. Along with that, we've fixed a ton of bugs. To make sure we're not breaking anything, we've added a lot of tests and CI checks. But let us know if you find any issues!
As a reminder, you can update pixi using pixi self-update
and move to a specific version, including backwards, with pixi self-update --version 0.27.0
.
pixi run
completion for fish
shell by @dennis-wey in #1680pixi init
create hatchling pyproject.toml by @Hofer-Julian in #1693[project]
table optional for pyproject.toml
manifests by @olivier-lacroix in #1732fish
completions location by @tdejager in #1647hatchling
by @Hofer-Julianupdate
command exist by @olivier-lacroix in #1690pixi exec
in GHA docs by @pavelzw in #1724hatchling
is used everywhere in documentation by @olivier-lacroix in #1733pep440_rs
from crates.io and use replace by @baszalmstra in #1698pixi add
with more than just package name and version by @ruben-arts in #1704--no-lockfile-update
by @ruben-arts in #1683pixi.toml
when pyproject.toml
is available. by @ruben-arts in #1640macos-13
by @ruben-arts in #1739activation.env
vars are by @ruben-arts in #1740pixi_manifest
by @baszalmstra in #1656pixi::consts
and pixi::config
into separate crates by @tdejager in #1684pixi_manifest
by @tdejager in #1700HasFeatures
by @tdejager in #1712HasFeatures
trait by @tdejager in #1717utils
by @tdejager in #1718fancy
to its own crate by @tdejager in #1722config
to repodata functions by @tdejager in #1723pypi-mapping
to its own crate by @tdejager in #1725utils
into 2 crates by @tdejager in #1736pixi_manifest
lib by @tdejager in #1661pinning-strategy
in the config. e.g. semver
-> >=1.2.3,<2
and no-pin
-> *
) #1516channel-priority
in the manifest. #1631pinning-strategy
to the configuration by @ruben-arts in #1516channel-priority
to the manifest and solve by @ruben-arts in #1631nushell
completion by @Hofer-Julian in #1599nushell
completions for pixi run
by @Hofer-Julian in #1627pixi run --environment
for nushell by @Hofer-Julian in #1636pyproject.toml
parser by @nichmor in #1592pixi global install
by @ruben-arts in #1626cargo add
by @Hofer-Julian in #1600cargo add
\" by @Hofer-Julian in #1605poetry
and conda
by @ruben-arts in #1624clap_complete_nushell
to dependencies by @Hofer-Julian in #1625stdout
for machine readable output by @Hofer-Julian in #1639pixi exec
command, execute commands in temporary environments, useful for testing in short-lived sessions.system-requirements
table, this is explained heregeos-rs
by @Hofer-Julian in #1563pixi shell
by @ruben-arts in #1507unix
machines, using pixi run --clean-env TASK_NAME
.pixi clean
or the cache with pixi clean cache
pixi clean
command by @ruben-arts in #1325--clean-env
flag to tasks and run command by @ruben-arts in #1395description
field to task
by @jjjermiah in #1479list_global_packages
by @dhirschfeld in #1458pixi info
by @ruben-arts in #1459--frozen
by @ruben-arts in #1468pixi install --all
output missing newline by @vigneshmanick in #1487Full commit history
"},{"location":"CHANGELOG/#0230-2024-05-27","title":"[0.23.0] - 2024-05-27","text":""},{"location":"CHANGELOG/#highlights_7","title":"\u2728 Highlights","text":"pixi config
and pixi update
pixi config
allows you to edit
, set
, unset
, append
, prepend
and list
your local/global or system configuration.pixi update
re-solves the full lockfile or use pixi update PACKAGE
to only update PACKAGE
, making sure your project is using the latest versions that the manifest allows for.pixi config
command by @chawyehsu in #1339pixi list --explicit
flag command by @jjjermiah in #1403[activation.env]
table for environment variables by @ruben-arts in #1156--all
at once by @tdejager in #1413pixi update
command to re-solve the lockfile by @baszalmstra in #1431 (fixes 20 :thumbsup:)detached-environments
to the config, move environments outside the project folder by @ruben-arts in #1381 (fixes 11 :thumbsup:)remove
arguments with add
by @olivier-lacroix in #1406--no-lockfile-update
. by @tobiasraabe in #1396Full commit history
"},{"location":"CHANGELOG/#0220-2024-05-13","title":"[0.22.0] - 2024-05-13","text":""},{"location":"CHANGELOG/#highlights_8","title":"\u2728 Highlights","text":"pixi add --pypi 'package @ package.whl'
, perfect for adding just build wheels to your environment in CI.pixi add --pypi 'package_from_git @ git+https://github.com/org/package.git'
, to add a package from a git repository.pixi add --pypi 'package_from_path @ file:///path/to/package' --editable
, to add a package from a local path.pixi add --pypi
by @wolfv in #1244install
cli doc by @vigneshmanick in #1336pixi project help
by @notPlancha in #1358pypi
dependencies. by @ruben-arts in #1366Full commit history
"},{"location":"CHANGELOG/#0211-2024-05-07","title":"[0.21.1] - 2024-05-07","text":""},{"location":"CHANGELOG/#fixed_15","title":"Fixed","text":"Full commit history
"},{"location":"CHANGELOG/#0210-2024-05-06","title":"[0.21.0] - 2024-05-06","text":""},{"location":"CHANGELOG/#highlights_9","title":"\u2728 Highlights","text":"osx-64
on osx-arm64
and wasm
environments.no-default-feature
option to simplify usage of environments.osx-64
on osx-arm64
and wasm
environments by @wolfv in #1020no-default-feature
option to environments by @olivier-lacroix in #1092/etc/pixi/config.toml
to global configuration search paths by @pavelzw in #1304task list
by @Hoxbro in #1286depends_on
to depends-on
by @ruben-arts in #1310pixi q
instead of only name by @ruben-arts in #1314rattler
by @baszalmstra in #1327Full commit history
"},{"location":"CHANGELOG/#0201-2024-04-26","title":"[0.20.1] - 2024-04-26","text":""},{"location":"CHANGELOG/#highlights_10","title":"\u2728 Highlights","text":"schema.json
normalization, add to docs by @bollwyvl in #1265Full commit history
"},{"location":"CHANGELOG/#0200-2024-04-19","title":"[0.20.0] - 2024-04-19","text":""},{"location":"CHANGELOG/#highlights_11","title":"\u2728 Highlights","text":"env
variables in the task
definition, these can also be used as default values for parameters in your task which you can overwrite with your shell's env variables. e.g. task = { cmd = \"task to run\", env = { VAR=\"value1\", PATH=\"my/path:$PATH\" } }
env
to the tasks to specify tasks specific environment variables by @wolfv in https://github.com/prefix-dev/pixi/pull/972--pyproject
option to pixi init
with a pyproject.toml by @olivier-lacroix in #1188pixi.lock
by @ruben-arts in #1209priority
definition by @ruben-arts in #1234--no-deps
when pip installing in editable mode by @glemaitre in #1220_
with -
when creating environments from features by @wolfv in #1203task = { cmd = \"task to run\", cwd = \"folder\", inputs = \"input.txt\", output = \"output.txt\"}
Where input.txt
and output.txt
where previously in folder
they are now relative the project root. This changed in: #1202task = { cmd = \"task to run\", inputs = \"input.txt\"}
previously searched for all input.txt
files now only for the ones in the project root. This changed in: #1204Full commit history
"},{"location":"CHANGELOG/#0191-2024-04-11","title":"[0.19.1] - 2024-04-11","text":""},{"location":"CHANGELOG/#highlights_12","title":"\u2728 Highlights","text":"This fixes the issue where pixi would generate broken environments/lockfiles when a mapping for a brand-new version of a package is missing.
"},{"location":"CHANGELOG/#changed_12","title":"Changed","text":"Full commit history
"},{"location":"CHANGELOG/#0190-2024-04-10","title":"[0.19.0] - 2024-04-10","text":""},{"location":"CHANGELOG/#highlights_13","title":"\u2728 Highlights","text":"pixi tree
command to show the dependency tree of the project.pixi tree
command to show dependency tree by @abkfenris in #1069pixi add --feature test --pypi package
) by @ruben-arts in #1135--no-progress
to disable all progress bars by @baszalmstra in #1105pixi add conda-forge::rattler-build
) by @baszalmstra in #1079tool.pixi.project.name
from project.name
by @olivier-lacroix in #1112features
and environments
from extras by @olivier-lacroix in #1077PIXI_ARCH
for pixi installation by @beenje in #1129tree
and list
commands by @ruben-arts in #1145conda-meta/history
to prevent conda.history.History.parse()
error by @jaimergp in #1117pyproject.toml
by @tdejager in #1121Full commit history
"},{"location":"CHANGELOG/#0180-2024-04-02","title":"[0.18.0] - 2024-04-02","text":""},{"location":"CHANGELOG/#highlights_14","title":"\u2728 Highlights","text":"pyproject.toml
, now pixi reads from the [tool.pixi]
table.git
, path
, and url
dependencies.[!TIP] These new features are part of the ongoing effort to make pixi more flexible, powerful, and comfortable for the python users. They are still in progress so expect more improvements on these features soon, so please report any issues you encounter and follow our next releases!
"},{"location":"CHANGELOG/#added_11","title":"Added","text":"pyproject.toml
by @olivier-lacroix in #999XDG_CONFIG_HOME
and XDG_CACHE_HOME
compliance by @chawyehsu in #1050zsh
may be used for installation on macOS by @pya in #1091pixi auth
documentation by @ytausch in #1076rstudio
to the IDE integration docs by @wolfv in #1144Full commit history
"},{"location":"CHANGELOG/#0171-2024-03-21","title":"[0.17.1] - 2024-03-21","text":""},{"location":"CHANGELOG/#highlights_15","title":"\u2728 Highlights","text":"A quick bug-fix release for pixi list
.
pixi list
by @baszalmstra in #1033pixi global
commands, thanks to @chawyehsu!task
execution thanks to caching \ud83d\ude80 Tasks that already executed successfully can be skipped based on the hash of the inputs
and outputs
.inputs
and outputs
hash based task skipping by @wolfv in #933pixi search
with platform selection and making limit optional by @wolfv in #979watch_file
in direnv usage by @pavelzw in #983linenums
to avoid buggy visualization by @ruben-arts in #1002install.sh
in Git Bash by @jdblischak in #966json
entries by @wolfv in #971tool
to strict json schema by @ruben-arts in #969Full commit history
"},{"location":"CHANGELOG/#0161-2024-03-11","title":"[0.16.1] - 2024-03-11","text":""},{"location":"CHANGELOG/#fixed_23","title":"Fixed","text":"0.16.0
by @ruben-arts in #951Full commit history
"},{"location":"CHANGELOG/#0160-2024-03-09","title":"[0.16.0] - 2024-03-09","text":""},{"location":"CHANGELOG/#highlights_17","title":"\u2728 Highlights","text":"rip
and add uv
as the PyPI resolver and installer.Full Commit history
"},{"location":"CHANGELOG/#0152-2024-02-29","title":"[0.15.2] - 2024-02-29","text":""},{"location":"CHANGELOG/#changed_17","title":"Changed","text":"v0.19.0
by @AliPiccioniQC in #885pixi run
if platform is not supported by @ruben-arts in #878Full commit history
"},{"location":"CHANGELOG/#0151-2024-02-26","title":"[0.15.1] - 2024-02-26","text":""},{"location":"CHANGELOG/#added_14","title":"Added","text":"pixi global list
display format by @chawyehsu in #723init --import
by @ruben-arts in #855Full commit history
"},{"location":"CHANGELOG/#0150-2024-02-23","title":"[0.15.0] - 2024-02-23","text":""},{"location":"CHANGELOG/#highlights_18","title":"\u2728 Highlights","text":"[pypi-dependencies]
now get built in the created environment, so they use the conda-installed build tools.pixi init --import env.yml
to import an existing conda environment file.[target.unix.dependencies]
to specify dependencies for unix systems instead of per platform.[!WARNING] This versions build failed, use v0.15.1
--feature
to pixi add
(#803)PIXI_NO_PATH_UPDATE
variable (#822)mike
to the documentation and update looks (#809)self-update
(#823)jlap
for now (#836)Full commit history
"},{"location":"CHANGELOG/#0140-2024-02-15","title":"[0.14.0] - 2024-02-15","text":""},{"location":"CHANGELOG/#highlights_19","title":"\u2728 Highlights","text":"Now, solve-groups
can be used in [environments]
to ensure dependency alignment across different environments without simultaneous installation. This feature is particularly beneficial for managing identical dependencies in test
and production
environments. Example configuration:
[environments]\ntest = { features = [\"prod\", \"test\"], solve-groups = [\"group1\"] }\nprod = { features = [\"prod\"], solve-groups = [\"group1\"] }\n
This setup simplifies managing dependencies that must be consistent across test
and production
."},{"location":"CHANGELOG/#added_16","title":"Added","text":"-f
/--feature
to the pixi project platform
command by @ruben-arts in #785pixi list
by @ruben-arts in #775shell-hook
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/811sdist
with direct references by @nichmor in https://github.com/prefix-dev/pixi/pull/813environments
by @ruben-arts in #790Full commit history
"},{"location":"CHANGELOG/#0130-2024-02-01","title":"[0.13.0] - 2024-02-01","text":""},{"location":"CHANGELOG/#highlights_20","title":"\u2728 Highlights","text":"This release is pretty crazy in amount of features! The major ones are: - We added support for multiple environments. :tada: Checkout the documentation - We added support for sdist
installation, which greatly improves the amount of packages that can be installed from PyPI. :rocket:
[!IMPORTANT]
Renaming of PIXI_PACKAGE_*
variables:
PIXI_PACKAGE_ROOT -> PIXI_PROJECT_ROOT\nPIXI_PACKAGE_NAME -> PIXI_PROJECT_NAME\nPIXI_PACKAGE_MANIFEST -> PIXI_PROJECT_MANIFEST\nPIXI_PACKAGE_VERSION -> PIXI_PROJECT_VERSION\nPIXI_PACKAGE_PLATFORMS -> PIXI_ENVIRONMENT_PLATFORMS\n
Check documentation here: https://pixi.sh/environment/ [!IMPORTANT]
The .pixi/env/
folder has been moved to accommodate multiple environments. If you only have one environment it is now named .pixi/envs/default
.
polarify
use-case as an example by @ruben-arts in #735pixi info -e/--environment
option by @ruben-arts in #676pixi channel add -f/--feature
option by @ruben-arts in #700pixi channel remove -f/--feature
option by @ruben-arts in #706pixi remove -f/--feature
option by @ruben-arts in #680pixi task list -e/--environment
option by @ruben-arts in #694pixi task remove -f/--feature
option by @ruben-arts in #694pixi install -e/--environment
option by @ruben-arts in #722pypi-dependencies
by @tdejager in #664pypi-dependencies
by @tdejager in #716pixi list
command by @hadim in #665pixi shell-hook
command by @orhun in #672#679 #684pixi self-update
by @hadim in #675PIXI_NO_PATH_UPDATE
for PATH update suppression by @chawyehsu in #692PyPiRequirement
by @orhun in #744tabwriter
instead of comfy_table
by @baszalmstra in #745[ or ]
) by @JafarAbdi in #677__pycache__
removal issues by @wolfv in #573pixi search
result correct by @chawyehsu in #713pixi info
by @ruben-arts in #728Full commit history
"},{"location":"CHANGELOG/#0120-2024-01-15","title":"[0.12.0] - 2024-01-15","text":""},{"location":"CHANGELOG/#highlights_21","title":"\u2728 Highlights","text":"pixi global upgrade
, pixi project version
commands, a PIXI_HOME
variable.pixi.toml
file already.global upgrade
command to pixi by @trueleo in #614PIXI_HOME
by @chawyehsu in #627--pypi
option to pixi remove
by @marcelotrevisani in https://github.com/prefix-dev/pixi/pull/602project version {major,minor,patch}
CLIs by @hadim in https://github.com/prefix-dev/pixi/pull/633Project
to Environment
by @baszalmstra in #630system-requirements
from Environment by @baszalmstra in #632activation.scripts
into Environment by @baszalmstra in #659pypi-dependencies
from Environment by @baszalmstra in https://github.com/prefix-dev/pixi/pull/656features
and environments
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/636windows
and unix
system requirements by @baszalmstra in https://github.com/prefix-dev/pixi/pull/635CODE_OF_CONDUCT.md
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/648Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.11.0...v0.12.0
"},{"location":"CHANGELOG/#0111-2024-01-06","title":"[0.11.1] - 2024-01-06","text":""},{"location":"CHANGELOG/#fixed_31","title":"Fixed","text":"pixi auth
in #642sdist
and multi environment featurepixi
improve!pixi project {version|channel|platform|description}
by @hadim in #579winget-releaser
gets correct identifier by @ruben-arts in #561system-requirements
by @ruben-arts in #595Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.10.0...v0.11.0
"},{"location":"CHANGELOG/#0100-2023-12-8","title":"[0.10.0] - 2023-12-8","text":""},{"location":"CHANGELOG/#highlights_23","title":"Highlights","text":"pypi-dependencies
support, now install even more of the pypi packages.pixi add --pypi
command to add a pypi package to your project.>=1.2.3, <1.3
) when adding requirement, instead of 1.2.3.*
by @baszalmstra in https://github.com/prefix-dev/pixi/pull/536rip
to fix by @tdejager in https://github.com/prefix-dev/pixi/pull/543.pyc
) support by @baszalmstra.data
directory headers
by @baszalmstrapixi add --pypi
command by @ruben-arts in https://github.com/prefix-dev/pixi/pull/539build
and host
specs while getting the best version by @ruben-arts in https://github.com/prefix-dev/pixi/pull/538winget
releaser by @ruben-arts in https://github.com/prefix-dev/pixi/pull/547rerun-sdk
example, force driven graph of pixi.lock
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/548Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.1...v0.10.0
"},{"location":"CHANGELOG/#091-2023-11-29","title":"[0.9.1] - 2023-11-29","text":""},{"location":"CHANGELOG/#highlights_24","title":"Highlights","text":"scripts
are now fixed. For example: https://github.com/prefix-dev/pixi/issues/516rip
to add scripts by @baszalmstra in https://github.com/prefix-dev/pixi/pull/517Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.0...v0.9.1
"},{"location":"CHANGELOG/#090-2023-11-28","title":"[0.9.0] - 2023-11-28","text":""},{"location":"CHANGELOG/#highlights_25","title":"Highlights","text":"pixi remove
, pixi rm
to remove a package from the environmentpip install -e
issue that was created by release v0.8.0
: https://github.com/prefix-dev/pixi/issues/507pixi remove
command by @Wackyator in https://github.com/prefix-dev/pixi/pull/483[pypi-dependencies]
@baszalmstra in https://github.com/prefix-dev/pixi/pull/508Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.8.0...v0.9.0
"},{"location":"CHANGELOG/#080-2023-11-27","title":"[0.8.0] - 2023-11-27","text":""},{"location":"CHANGELOG/#highlights_26","title":"Highlights","text":"[pypi-dependencies]
ALPHA RELEASE\ud83d\udc0d\ud83c\udf89, you can now add PyPI dependencies to your pixi project.pixi run
has been improved with better errors and showing what task is run.[!NOTE] [pypi-dependencies]
support is still incomplete, missing functionality is listed here: https://github.com/orgs/prefix-dev/projects/6. Our intent is not to have 100% feature parity with pip
, our goal is that you only need pixi
for both conda and pypi packages alike.
rattler
@ruben-arts in https://github.com/prefix-dev/pixi/pull/496pypi-dependencies
by @baszalmstra in https://github.com/prefix-dev/pixi/pull/494command not found
is returned by @ruben-arts in https://github.com/prefix-dev/pixi/pull/488pixi.sh
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/458 && https://github.com/prefix-dev/pixi/pull/459 && https://github.com/prefix-dev/pixi/pull/460RECORD not found
issue by @baszalmstra in https://github.com/prefix-dev/pixi/pull/495.gitignore
and give better errors by @ruben-arts in https://github.com/prefix-dev/pixi/pull/490pypi-dependencies
by @baszalmstra in https://github.com/prefix-dev/pixi/pull/478pypi-dependencies
type by @ruben-arts in https://github.com/prefix-dev/pixi/pull/471pypi-dependencies
parsing errors by @ruben-arts in https://github.com/prefix-dev/pixi/pull/479ctypes
by @liquidcarbon in https://github.com/prefix-dev/pixi/pull/441rerun
example by @ruben-arts in https://github.com/prefix-dev/pixi/pull/489pypi-dependencies
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/481Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.7.0...v0.8.0
"},{"location":"CHANGELOG/#070-2023-11-14","title":"[0.7.0] - 2023-11-14","text":""},{"location":"CHANGELOG/#highlights_27","title":"Highlights","text":"channels = [\"conda-forge\", \"pytorch\"]
All packages found in conda-forge will not be taken from pytorch.pytorch = { version=\"*\", channel=\"pytorch\"}
pixi run <TABTAB>
pixi run docs
!pixi run
for bash
and zsh
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/390python = { version = \"*\" channel=\"conda-forge\" }
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/439project.version
as optional field in the pixi.toml
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/400pixi.toml
to help users find errors by @ruben-arts in https://github.com/prefix-dev/pixi/pull/396install.sh
to create dot file if not present by @humphd in https://github.com/prefix-dev/pixi/pull/408task list
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/431global install
path on windows by @ruben-arts in https://github.com/prefix-dev/pixi/pull/449PIXI_BIN_PATH
use backslashes by @Hofer-Julian in https://github.com/prefix-dev/pixi/pull/442mkdocs
with all documentation by @ruben-arts in https://github.com/prefix-dev/pixi/pull/435Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.6.0...v0.7.0
"},{"location":"CHANGELOG/#060-2023-10-17","title":"[0.6.0] - 2023-10-17","text":""},{"location":"CHANGELOG/#highlights_28","title":"Highlights","text":"This release fixes some bugs and adds the --cwd
option to the tasks.
--frozen
logic to error when there is no lockfile by @ruben-arts in https://github.com/prefix-dev/pixi/pull/373rerun
example to v0.9.1 by @ruben-arts in https://github.com/prefix-dev/pixi/pull/389--cwd
) in pixi tasks
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/380Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.5.0...v0.6.0
"},{"location":"CHANGELOG/#050-2023-10-03","title":"[0.5.0] - 2023-10-03","text":""},{"location":"CHANGELOG/#highlights_29","title":"Highlights","text":"We rebuilt pixi shell
, fixing the fact that your rc
file would overrule the environment activation.
shell
works and make activation more robust by @wolfv in https://github.com/prefix-dev/pixi/pull/316.gitignore
and .gitattributes
files by @ruben-arts in https://github.com/prefix-dev/pixi/pull/359--locked
and --frozen
to getting an up-to-date prefix by @ruben-arts in https://github.com/prefix-dev/pixi/pull/363pixi
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/353 & https://github.com/prefix-dev/pixi/pull/365cargo upgrade --all --incompatible
by @wolfv in https://github.com/prefix-dev/pixi/pull/358Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.4.0...v0.5.0
"},{"location":"CHANGELOG/#040-2023-09-22","title":"[0.4.0] - 2023-09-22","text":""},{"location":"CHANGELOG/#highlights_30","title":"Highlights","text":"This release adds the start of a new cli command pixi project
which will allow users to interact with the project configuration from the command line.
0.9.0
by @ruben-arts in https://github.com/prefix-dev/pixi/pull/350xtsci-dist
to Community.md by @HaoZeke in https://github.com/prefix-dev/pixi/pull/339ribasim
to Community.md by @Hofer-Julian in https://github.com/prefix-dev/pixi/pull/340LFortran
to Community.md by @wolfv in https://github.com/prefix-dev/pixi/pull/341pixi project channel add
subcommand by @baszalmstra and @ruben-arts in https://github.com/prefix-dev/pixi/pull/347Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.3.0...v0.4.0
"},{"location":"CHANGELOG/#030-2023-09-11","title":"[0.3.0] - 2023-09-11","text":""},{"location":"CHANGELOG/#highlights_31","title":"Highlights","text":"This releases fixes a lot of issues encountered by the community as well as some awesome community contributions like the addition of pixi global list
and pixi global remove
.
system-requirements
are properly filtered by platform, by @ruben-arts (#299)thread 'tokio-runtime-worker' has overflowed its stack
issue, by @baszalmstra (#28)pixi global list
and pixi global remove
commands, by @cjfuller (#318)--manifest-path
must point to a pixi.toml
file, by @baszalmstra (#324)pixi search
command to search for packages, by @Wackyator. (#244)[target.win-64.tasks]
, by @ruben-arts. (#269)pixi.lock
automatically, by @spenserblack. (#265)As this is our first Semantic Versioning release, we'll change from the prototype to the developing phase, as semver describes. A 0.x release could be anything from a new major feature to a breaking change where the 0.0.x releases will be bugfixes or small improvements.
"},{"location":"CHANGELOG/#highlights_33","title":"Highlights","text":"unix
platforms, by @baszalmstra. (#250)miette
, by @baszalmstra. (#211)aarch64-linux
, by @pavelzw. (#233)libsolv
as the default solver, by @ruben-arts. (#209)condax
in the docs, by @maresb. (#207)brew
installation instructions, by @wolfv. (#208)activation.scripts
to the pixi.toml
to configure environment activation, by @ruben-arts. (#217)pixi upload
command to upload packages to prefix.dev
, by @wolfv. (#127)pixi.toml
, by @wolfv. (#218)pixi task list
to show all tasks in the project, by @tdejager. (#228)--color
to configure the colors in the output, by @baszalmstra. (#243)pixi.toml
and .gitignore
, by @pavelzw. (#216)pixi.toml
, by @wolfv. (#220)PS1
variable when going into a pixi shell
, by @ruben-arts. (#201)pixi add
, by @baszalmstra. (#213)run
subcommand to use the deno_task_shell
for improved cross-platform functionality. More details in the Deno Task Runner documentation.info
subcommand to retrieve system-specific information understood by pixi
.[commands]
in the pixi.toml
is now called [tasks]
. (#177)pixi info
command to get more system information by @wolfv in (#158)deno_task_shell
to execute commands in pixi run
by @baszalmstra in (#173)pixi command
command to the cli by @tdejager in (#177)pixi auth
command by @wolfv in (#183)depends_on
by @tdejager in (#161)PATH
variable where it is already set by @baszalmstra in (#169)pixi run
by @tdejager in (#190)Improving the reliability is important to us, so we added an integration testing framework, we can now test as close as possible to the CLI level using cargo
.
pixi build
, allowing host-
and build-
dependencies
(#149)Fixing Windows installer build in CI. (#145)
"},{"location":"CHANGELOG/#004-2023-06-26","title":"[0.0.4] - 2023-06-26","text":""},{"location":"CHANGELOG/#highlights_37","title":"Highlights","text":"A new command, auth
which can be used to authenticate the host of the package channels. A new command, shell
which can be used to start a shell in the pixi environment of a project. A refactor of the install
command which is changed to global install
and the install
command now installs a pixi project if you run it in the directory. Platform specific dependencies using [target.linux-64.dependencies]
instead of [dependencies]
in the pixi.toml
Lots and lots of fixes and improvements to make it easier for this user, where bumping to the new version of rattler
helped a lot.
pixi.toml
issues(#111)shell
command to use the pixi environment without pixi run
. (#116)-v, -vv, -vvv
(#118)auth
command to be able to login or logout of a host like repo.prefix.dev
if you're using private channels. (#120)pixi install
moved to pixi global install
and pixi install
became the installation of a project using the pixi.toml
(#124)pixi run
uses default shell (#119)pixi add
command is fixed. (#132)