diff --git a/.gitignore b/.gitignore
index 66fd13c..d9592b4 100644
--- a/.gitignore
+++ b/.gitignore
@@ -13,3 +13,4 @@
# Dependency directories (remove the comment below to include it)
# vendor/
+.idea/
diff --git a/.travis.yml b/.travis.yml
index d6a4d97..86de2cb 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -22,9 +22,6 @@ addons:
- attr
- lsof
- gcc-mingw-w64 # for windows
-before_install:
- - source ./check-changed.sh
- - if [ $SKIP_TEST == true ]; then exit 0; fi
install: true
before_script:
- export GO111MODULE=on
diff --git a/ADOPTERS.md b/ADOPTERS.md
new file mode 100644
index 0000000..8f4d2bc
--- /dev/null
+++ b/ADOPTERS.md
@@ -0,0 +1,29 @@
+# JuiceFS Adopters
+
+For users of JuiceFS:
+
+| Company/Team | Environment | Use Cases |
+| :--- | :--- | :--- |
+| [Xiaomi](https://www.mi.com) | Production | AI |
+| [Li Auto Inc.](https://www.lixiang.com) | Production | Big Data, AI |
+| [Shopee](https://shopee.com) | Production | Big Data |
+| [Zhihu](https://www.zhihu.com) | Production | Big Data |
+| [Yaoxin Financing Re-Guarantee](https://www.yaoxinhd.com) | Production | Big Data, File Sharing |
+| [Megvii](https://megvii.com) | Production | AI |
+| [Piesat Information Technology Co., Ltd.](https://www.piesat.cn) | Production | File Sharing |
+| [Gene Way](http://www.geneway.cn) | Production | File Sharing |
+| [Dingdong Fresh](https://www.100.me) | Testing | Big Data |
+| [UniSound](https://www.unisound.com) | Testing | AI |
+| [SF-Express](https://www.sf-express.com) | Testing | Big Data, File Sharing |
+| [BIGO](https://bigo.tv) | Testing | AI |
+| [Inner Mongolia MENGSHANG Consumer Finance Co., Ltd.](https://www.mengshangxiaofei.com) | Testing | File Sharing |
+
+For the JuiceFS community ecosystem:
+
+- The [Megvii](https://en.megvii.com) team contributed the [Python SDK](https://github.com/megvii-research/juicefs-python).
+- The [PaddlePaddle](https://github.com/paddlepaddle/paddle) team has integrated JuiceFS into [Paddle Operator](https://github.com/PaddleFlow/paddle-operator); please refer to [the documentation](https://github.com/PaddleFlow/paddle-operator/blob/sampleset/docs/en/ext-overview.md).
+- You can build a distributed [Milvus](https://milvus.io) cluster based on JuiceFS; the Milvus team has written a [case study](https://zilliz.com/blog/building-a-milvus-cluster-based-on-juicefs) and a [tutorial](https://tutorials.milvus.io/en-juicefs/index.html?index=..%2F..index#0).
+- [Apache Kylin 4.0](http://kylin.apache.org), an OLAP engine, can be deployed with JuiceFS in a disaggregated storage and compute architecture on every public cloud platform; see [the video](https://www.bilibili.com/video/BV1c54y1W72S) (in Chinese) and [the post](https://juicefs.com/blog/en/posts/optimize-kylin-on-juicefs/) for this use case.
+- The [UniSound](https://www.unisound.com) team participated in the development of the [Fluid](https://github.com/fluid-cloudnative/fluid) JuiceFSRuntime cache engine; please refer to [this document](https://github.com/fluid-cloudnative/fluid/blob/master/docs/en/samples/juicefs_runtime.md).
+
+You are welcome to share your experience after using JuiceFS, either by submitting a Pull Request directly to this list, or by contacting us at hello@juicedata.io.
diff --git a/ADOPTERS_CN.md b/ADOPTERS_CN.md
new file mode 100644
index 0000000..230285b
--- /dev/null
+++ b/ADOPTERS_CN.md
@@ -0,0 +1,29 @@
+# JuiceFS 使用者
+
+使用 JuiceFS 的用户:
+
+| 公司/团队 | 使用环境 | 用例 |
+| :--- | :--- | :--- |
+| [小米](https://www.mi.com) | Production | AI |
+| [理想汽车](https://www.lixiang.com) | Production | 大数据,AI |
+| [Shopee](https://shopee.com) | Production | 大数据 |
+| [知乎](https://www.zhihu.com) | Production | 大数据 |
+| [尧信](https://www.yaoxinhd.com) | Production | 大数据,共享文件存储 |
+| [旷视](https://megvii.com) | Production | AI |
+| [航天宏图](https://www.piesat.cn) | Production | 共享文件存储 |
+| [溯源精微](http://www.geneway.cn) | Production | 共享文件存储 |
+| [叮咚买菜](https://www.100.me) | Testing | 大数据 |
+| [云知声](https://www.unisound.com) | Testing | AI |
+| [顺丰速运](https://www.sf-express.com) | Testing | 大数据,共享文件存储 |
+| [BIGO](https://bigo.tv) | Testing | AI |
+| [蒙商消费金融](https://www.mengshangxiaofei.com) | Testing | 共享文件存储 |
+
+此外在社区生态方面,
+
+- [旷视科技](https://megvii.com)团队贡献了 [Python SDK](https://github.com/megvii-research/juicefs-python)。
+- [PaddlePaddle](https://github.com/paddlepaddle/paddle) 团队已将 JuiceFS 缓存加速特性集成到 [Paddle Operator](https://github.com/PaddleFlow/paddle-operator) 中,具体请参考[文档](https://github.com/PaddleFlow/paddle-operator/blob/sampleset/docs/zh_CN/ext-overview.md)。
+- 通过 JuiceFS 可以轻松搭建一个 [Milvus](https://milvus.io) 向量搜索引擎,Milvus 团队已经撰写了官方 [案例](https://zilliz.com/blog/building-a-milvus-cluster-based-on-juicefs) 与 [教程](https://tutorials.milvus.io/en-juicefs/index.html?index=..%2F..index#0)。
+- 大数据 OLAP 分析引擎 [Apache Kylin 4.0](http://kylin.apache.org) 可以使用 JuiceFS 在所有公有云上轻松部署存储计算分离架构的集群,请看 [视频分享](https://www.bilibili.com/video/BV1c54y1W72S) 和 [案例文章](https://juicefs.com/blog/cn/posts/optimize-kylin-on-juicefs/)。
+- [云知声](https://www.unisound.com) 团队参与开发 [Fluid](https://github.com/fluid-cloudnative/fluid) JuiceFSRuntime 缓存引擎,具体请参考[文档](https://github.com/fluid-cloudnative/fluid/blob/master/docs/zh/samples/juicefs_runtime.md) 。
+
+欢迎你在使用 JuiceFS 后,向大家分享你的使用经验,可以直接向这个列表提交 Pull Request,或者联系我们 hello@juicedata.io。
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..e69de29
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..c958590
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,40 @@
+# Contributing to JuiceFS
+
+## Guidelines
+
+- Before starting work on a feature or bug fix, please search GitHub or reach out to us via GitHub, Slack, etc. The purpose of this step is to make sure no one else is already working on it; we'll ask you to open a GitHub issue if necessary.
+- We will use the GitHub issue to discuss the feature and come to an agreement. This is to prevent your time, as well as ours, from being wasted.
+- If it is a major feature update, we highly recommend you also write a design document to help the community understand your motivation and solution.
+- A good way to find a project properly sized for a first-time contributor is to search for open issues with the label ["kind/good-first-issue"](https://github.com/juicedata/juicefs/labels/kind%2Fgood-first-issue) or ["kind/help-wanted"](https://github.com/juicedata/juicefs/labels/kind%2Fhelp-wanted).
+
+## Coding Style
+
+- We're following ["Effective Go"](https://golang.org/doc/effective_go.html) and ["Go Code Review Comments"](https://github.com/golang/go/wiki/CodeReviewComments).
+- Use `go fmt` to format your code before committing. You can find information on editor support for Go tools in ["IDEs and Plugins for Go"](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins).
+- If you see any code which clearly violates the style guide, please fix it and send a pull request.
+- Every new source file must begin with a license header.
+- Install [pre-commit](https://pre-commit.com/) and use it to set up a pre-commit hook for static analysis. Just run `pre-commit install` in the root of the repo.
+
+## Sign the CLA
+
+Before you can contribute to JuiceFS, you will need to sign the [Contributor License Agreement](https://cla-assistant.io/juicedata/juicefs). A CLA assistant will guide you through the process when you submit your first pull request.
+
+## What is a Good PR
+
+- Presence of unit tests
+- Adherence to the coding style
+- Adequate in-line comments
+- Explanatory commit message
+
+## Contribution Flow
+
+This is a rough outline of what a contributor's workflow looks like:
+
+- Create a topic branch from where to base the contribution. This is usually `main`.
+- Make commits of logical units.
+- Make sure commit messages are in the proper format.
+- Push changes in a topic branch to a personal fork of the repository.
+- Submit a pull request to [juicedata/juicefs](https://github.com/juicedata/juicefs/compare). The PR should link to an issue, created either by you or by someone else.
+- The PR must receive approval from at least one maintainer before it can be merged.
+
+Happy hacking!
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..db46fac
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,63 @@
+export GO111MODULE=on
+
+all: juicefs
+
+REVISION := $(shell git rev-parse --short HEAD 2>/dev/null)
+REVISIONDATE := $(shell git log -1 --pretty=format:'%ad' --date short 2>/dev/null)
+PKG := github.com/juicedata/juicefs/pkg/version
+LDFLAGS = -s -w
+ifneq ($(strip $(REVISION)),) # Use git clone
+ LDFLAGS += -X $(PKG).revision=$(REVISION) \
+ -X $(PKG).revisionDate=$(REVISIONDATE)
+endif
+
+SHELL = /bin/sh
+
+ifdef STATIC
+ LDFLAGS += -linkmode external -extldflags '-static'
+ CC = /usr/bin/musl-gcc
+ export CC
+endif
+
+juicefs: Makefile cmd/*.go pkg/*/*.go
+ go build -ldflags="$(LDFLAGS)" -o juicefs ./cmd
+
+juicefs.lite: Makefile cmd/*.go pkg/*/*.go
+ go build -tags nogateway,nocos,nobos,nohdfs,noibmcos,noobs,nooss,noqingstor,noscs,nosftp,noswift,noupyun,noazure,nogs,noufile,nob2,nosqlite,nomysql,nopg,notikv,nobadger \
+ -ldflags="$(LDFLAGS)" -o juicefs.lite ./cmd
+
+juicefs.ceph: Makefile cmd/*.go pkg/*/*.go
+ go build -tags ceph -ldflags="$(LDFLAGS)" -o juicefs.ceph ./cmd
+
+/usr/local/include/winfsp:
+ sudo mkdir -p /usr/local/include/winfsp
+ sudo cp hack/winfsp_headers/* /usr/local/include/winfsp
+
+juicefs.exe: /usr/local/include/winfsp cmd/*.go pkg/*/*.go
+ GOOS=windows CGO_ENABLED=1 CC=x86_64-w64-mingw32-gcc \
+ go build -ldflags="$(LDFLAGS)" -buildmode exe -o juicefs.exe ./cmd
+
+.PHONY: snapshot release test
+snapshot:
+ docker run --rm --privileged \
+ -e PRIVATE_KEY=${PRIVATE_KEY} \
+ -v ~/go/pkg/mod:/go/pkg/mod \
+ -v `pwd`:/go/src/github.com/juicedata/juicefs \
+ -v /var/run/docker.sock:/var/run/docker.sock \
+ -w /go/src/github.com/juicedata/juicefs \
+ juicedata/golang-cross:latest release --snapshot --rm-dist --skip-publish
+
+release:
+ docker run --rm --privileged \
+ -e PRIVATE_KEY=${PRIVATE_KEY} \
+ --env-file .release-env \
+ -v ~/go/pkg/mod:/go/pkg/mod \
+ -v `pwd`:/go/src/github.com/juicedata/juicefs \
+ -v /var/run/docker.sock:/var/run/docker.sock \
+ -w /go/src/github.com/juicedata/juicefs \
+ juicedata/golang-cross:latest release --rm-dist
+
+test:
+ JFS_PAGE_STACK=1 go test -v -cover ./pkg/... -coverprofile=cov1.out
+ sudo JFS_PAGE_STACK=1 JFS_GC_SKIPPEDTIME=1 `which go` test -v -cover ./cmd/... -coverprofile=cov2.out
+ sudo `which go` test ./integration/... -cover -coverprofile=cov3.out -coverpkg=./pkg/...,./cmd/...
diff --git a/README.md b/README.md
index ab7dac6..82ce9f2 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,227 @@
-# test-ci
-test some ci problem that not easy to resolve
+
+
+
+
+
+
+
+
+**JuiceFS** is a high-performance [POSIX](https://en.wikipedia.org/wiki/POSIX) file system released under the Apache License 2.0 and specially optimized for cloud-native environments. When you store data in JuiceFS, the data itself is persisted in object storage (e.g. Amazon S3), while the corresponding metadata can be persisted in various database engines such as Redis, MySQL, and SQLite, depending on your scenario.
+
+JuiceFS connects massive cloud storage directly to big data, machine learning, artificial intelligence, and other application platforms that are already in production. Without modifying any code, you can use massive cloud storage as efficiently as local storage.
+
+📺 **Video**: [What is JuiceFS?](https://www.youtube.com/watch?v=8RdZoBG-D6Y)
+
+## Highlighted Features
+
+1. **Fully POSIX-compatible**: Use JuiceFS like a local file system; it works seamlessly with existing applications and requires no changes to your business logic.
+2. **Fully Hadoop-compatible**: The JuiceFS [Hadoop Java SDK](docs/en/deployment/hadoop_java_sdk.md) is compatible with Hadoop 2.x, Hadoop 3.x, and a variety of components in the Hadoop ecosystem.
+3. **S3-compatible**: The JuiceFS [S3 Gateway](docs/en/deployment/s3_gateway.md) provides an S3-compatible interface.
+4. **Cloud Native**: JuiceFS provides a [Kubernetes CSI driver](docs/en/deployment/how_to_use_on_kubernetes.md) for anyone who wants to use JuiceFS in Kubernetes.
+5. **Sharing**: JuiceFS is a shared file storage that can be read and written by thousands of clients.
+6. **Strong Consistency**: Confirmed modifications are immediately visible on all servers that mount the same file system.
+7. **Outstanding Performance**: Latency can be as low as a few milliseconds and throughput scales almost without limit. See the [test results](docs/en/benchmark/benchmark.md).
+8. **Data Encryption**: Supports data encryption in transit and at rest; read [the guide](docs/en/security/encrypt.md) for more information.
+9. **Global File Locks**: JuiceFS supports both BSD locks (flock) and POSIX record locks (fcntl).
+10. **Data Compression**: JuiceFS supports compressing all your data with [LZ4](https://lz4.github.io/lz4) or [Zstandard](https://facebook.github.io/zstd).
+
+---
+
+[Architecture](#architecture) | [Getting Started](#getting-started) | [Advanced Topics](#advanced-topics) | [POSIX Compatibility](#posix-compatibility) | [Performance Benchmark](#performance-benchmark) | [Supported Object Storage](#supported-object-storage) | [Who is using](#who-is-using) | [Roadmap](#roadmap) | [Reporting Issues](#reporting-issues) | [Contributing](#contributing) | [Community](#community) | [Usage Tracking](#usage-tracking) | [License](#license) | [Credits](#credits) | [FAQ](#faq)
+
+---
+
+## Architecture
+
+JuiceFS consists of three parts:
+
+1. **JuiceFS Client**: Coordinates the object storage and the metadata engine, and implements file system interfaces such as POSIX, Hadoop, Kubernetes CSI, and the S3 gateway.
+2. **Data Storage**: Stores the data itself; local disk and object storage are supported.
+3. **Metadata Engine**: Stores the metadata corresponding to the data; multiple engines such as Redis, MySQL, and SQLite are supported.
+
+![JuiceFS Architecture](docs/en/images/juicefs-arch-new.png)
+
+JuiceFS relies on Redis to store file system metadata. Redis is a fast, open-source, in-memory key-value store, which makes it well suited for storing metadata. All data is stored in object storage through the JuiceFS client. [Learn more](docs/en/introduction/architecture.md)
+
+![JuiceFS Storage Format](docs/en/images/juicefs-storage-format-new.png)
+
+Any file stored in JuiceFS is split into fixed-size **"Chunks"**, with a default upper limit of 64 MiB. Each Chunk is composed of one or more **"Slices"**, whose lengths vary depending on how the file is written. Each Slice is further split into fixed-size **"Blocks"**, 4 MiB by default. Finally, these Blocks are stored in the object storage. At the same time, JuiceFS stores each file together with its Chunks, Slices, Blocks and other metadata in the metadata engine. [Learn more](docs/en/reference/how_juicefs_store_files.md)
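+
+As a rough worked example (assuming the file is written sequentially, so each Chunk holds a single Slice): a 100 MiB file would be stored as two Chunks of 64 MiB and 36 MiB, which are split into 16 + 9 = 25 Blocks of at most 4 MiB each; those 25 objects are what actually end up in the object storage.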
+
+![How JuiceFS stores your files](docs/en/images/how-juicefs-stores-files-new.png)
+
+With JuiceFS, files are ultimately split into Chunks, Slices and Blocks and stored in object storage. Therefore, you will not find the original files in the object storage platform's file browser; the bucket only contains a `chunks` directory and a bunch of numbered directories and files. Don't panic, this is the secret of JuiceFS's high performance!
+
+## Getting Started
+
+To create a JuiceFS file system, you need three things:
+
+1. A Redis database for metadata storage
+2. An object storage service for data blocks
+3. The JuiceFS client
+
+Please refer to the [Quick Start Guide](docs/en/getting-started/for_local.md) to start using JuiceFS immediately!
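+
+As a minimal sketch of what the guide walks through (the storage options and credentials below are placeholders; adapt them to your environment):
+
+```bash
+# Format a file system named "myjfs", using Redis for metadata
+# and an S3 bucket for data (placeholder bucket and credentials).
+$ ./juicefs format \
+    --storage s3 \
+    --bucket https://mybucket.s3.amazonaws.com \
+    --access-key ABCDEFG \
+    --secret-key HIJKLMN \
+    redis://127.0.0.1:6379/1 myjfs
+
+# Mount it on /mnt/jfs.
+$ ./juicefs mount redis://127.0.0.1:6379/1 /mnt/jfs
+```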
+
+### Command Reference
+
+See the [command reference](docs/en/reference/command_reference.md) for all subcommands and their options.
+
+### Kubernetes
+
+[Using JuiceFS on Kubernetes](docs/en/deployment/how_to_use_on_kubernetes.md) is easy; give it a try.
+
+### Hadoop Java SDK
+
+If you want to use JuiceFS in Hadoop, check the [Hadoop Java SDK](docs/en/deployment/hadoop_java_sdk.md).
+
+## Advanced Topics
+
+- [Redis Best Practices](docs/en/administration/metadata/redis_best_practices.md)
+- [How to Setup Object Storage](docs/en/reference/how_to_setup_object_storage.md)
+- [Cache Management](docs/en/administration/cache_management.md)
+- [Fault Diagnosis and Analysis](docs/en/administration/fault_diagnosis_and_analysis.md)
+- [FUSE Mount Options](docs/en/reference/fuse_mount_options.md)
+- [Using JuiceFS on Windows](docs/en/juicefs_on_windows.md)
+- [S3 Gateway](docs/en/deployment/s3_gateway.md)
+
+Please refer to [JuiceFS User Manual](docs/en/README.md) for more information.
+
+## POSIX Compatibility
+
+JuiceFS passed all 8813 tests in the latest [pjdfstest](https://github.com/pjd/pjdfstest).
+
+```
+All tests successful.
+
+Test Summary Report
+-------------------
+/root/soft/pjdfstest/tests/chown/00.t (Wstat: 0 Tests: 1323 Failed: 0)
+ TODO passed: 693, 697, 708-709, 714-715, 729, 733
+Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr 0.38 sys + 2.57 cusr 3.93 csys = 9.65 CPU)
+Result: PASS
+```
+
+Besides the things covered by pjdfstest, JuiceFS provides:
+
+- Close-to-open consistency. Once a file is closed, subsequent opens and reads are guaranteed to see the data written before the close. Within the same mount point, newly written data is visible to reads immediately.
+- Rename and all other metadata operations are atomic, guaranteed by Redis transactions.
+- Open files remain accessible after unlink from the same mount point.
+- Mmap is supported (tested with FSx).
+- Fallocate with punch hole support.
+- Extended attributes (xattr).
+- BSD locks (flock).
+- POSIX record locks (fcntl).
+
+## Performance Benchmark
+
+### Basic benchmark
+
+JuiceFS provides a subcommand that runs a few basic benchmarks to help you understand how it performs in your environment:
+
+![JuiceFS Bench](docs/en/images/juicefs-bench.png)
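+
+For example, to benchmark a mounted file system with 4 concurrent threads (the flags shown are the ones defined by the `bench` subcommand and all have sensible defaults):
+
+```bash
+$ ./juicefs bench -p 4 /mnt/jfs
+```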
+
+### Throughput
+
+We performed a sequential read/write benchmark on JuiceFS, [EFS](https://aws.amazon.com/efs) and [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) with [fio](https://github.com/axboe/fio); here is the result:
+
+![Sequential Read Write Benchmark](docs/en/images/sequential-read-write-benchmark.svg)
+
+It shows that JuiceFS can provide 10X the throughput of the other two; read [more details](docs/en/benchmark/fio.md).
+
+### Metadata IOPS
+
+We performed a simple metadata benchmark on JuiceFS, [EFS](https://aws.amazon.com/efs) and [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) with [mdtest](https://github.com/hpc/ior); here is the result:
+
+![Metadata Benchmark](docs/en/images/metadata-benchmark.svg)
+
+It shows that JuiceFS can provide significantly more metadata IOPS than the other two; read [more details](docs/en/benchmark/mdtest.md).
+
+### Analyze performance
+
+There is a virtual file called `.accesslog` in the root of JuiceFS that shows all the operations and the time they take, for example:
+
+```bash
+$ cat /jfs/.accesslog
+2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>
+2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>
+2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006>
+```
+
+The last number on each line is the time (in seconds) the operation took. You can use this directly to debug and analyze performance issues, or try `./juicefs profile /jfs` to monitor real-time statistics. Please run `./juicefs profile -h` or refer to [this document](docs/en/benchmark/operations_profiling.md) to learn more about this subcommand.
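+
+Since `.accesslog` is plain text, ordinary shell tools work on it. For example, a crude way to surface only the operations that took roughly 1 ms or longer (faster operations are logged with a `<0.000...>` cost, as in the sample above):
+
+```bash
+$ cat /jfs/.accesslog | grep -v '<0.000'
+```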
+
+## Supported Object Storage
+
+- Amazon S3
+- Google Cloud Storage
+- Azure Blob Storage
+- Alibaba Cloud Object Storage Service (OSS)
+- Tencent Cloud Object Storage (COS)
+- QingStor Object Storage
+- Ceph RGW
+- MinIO
+- Local disk
+- Redis
+
+JuiceFS supports almost all object storage services. [Learn more](docs/en/reference/how_to_setup_object_storage.md#supported-object-storage).
+
+## Who is using
+
+JuiceFS is considered beta quality and the storage format is not yet stabilized. If you want to use it in a production environment, please do a careful evaluation first. If you are interested in it, please test it as soon as possible and give us [feedback](https://github.com/juicedata/juicefs/discussions).
+
+You are welcome to share your experience with everyone after using JuiceFS. We maintain a summary list in [ADOPTERS.md](ADOPTERS.md), which also includes other open source projects used together with JuiceFS.
+
+## Roadmap
+
+- Stabilize storage format
+- Support FoundationDB as meta engine
+- User and group quotas
+- Directory quotas
+- Snapshot
+- Write once read many (WORM)
+
+## Reporting Issues
+
+We use [GitHub Issues](https://github.com/juicedata/juicefs/issues) to track community-reported issues. You can also [contact](#community) the community to get answers.
+
+## Contributing
+
+Thank you for your contribution! Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) for more information.
+
+## Community
+
+You are welcome to join the [Discussions](https://github.com/juicedata/juicefs/discussions) and the [Slack channel](https://join.slack.com/t/juicefs/shared_invite/zt-n9h5qdxh-0bJojPaql8cfFgwerDQJgA) to connect with JuiceFS team members and other users.
+
+## Usage Tracking
+
+By default, JuiceFS collects **anonymous** usage data. It only collects core metrics (e.g. version number); no user data or other sensitive data is collected. You can review the related code [here](pkg/usage/usage.go).
+
+This data helps us understand how the community is using this project. You can easily disable reporting with the command line option `--no-usage-report`:
+
+```bash
+$ ./juicefs mount --no-usage-report
+```
+
+## License
+
+JuiceFS is open-sourced under the Apache License 2.0; see [LICENSE](LICENSE).
+
+## Credits
+
+The design of JuiceFS was inspired by [Google File System](https://research.google/pubs/pub51), [HDFS](https://hadoop.apache.org) and [MooseFS](https://moosefs.com), thanks to their great work.
+
+## FAQ
+
+### Why doesn't JuiceFS support XXX object storage?
+
+JuiceFS already supports many object storage services; please check [the list](docs/en/reference/how_to_setup_object_storage.md#supported-object-storage) first. If your object storage is S3-compatible, you can treat it as S3. Otherwise, try reporting an issue.
+
+### Can I use Redis cluster?
+
+The simple answer is no. JuiceFS uses [transactions](https://redis.io/topics/transactions) to guarantee the atomicity of metadata operations, which are not well supported in cluster mode. Sentinel or another HA solution for Redis is needed.
+
+See ["Redis Best Practices"](docs/en/administration/metadata/redis_best_practices.md) for more information.
+
+### What's the difference between JuiceFS and XXX?
+
+See ["Comparison with Others"](docs/en/comparison) for more information.
+
+For more FAQs, please see the [full list](docs/en/faq.md).
diff --git a/README_CN.md b/README_CN.md
new file mode 100644
index 0000000..2d79e46
--- /dev/null
+++ b/README_CN.md
@@ -0,0 +1,226 @@
+
+
+
+
+
+
+
+
+
+JuiceFS 是一款高性能 [POSIX](https://en.wikipedia.org/wiki/POSIX) 文件系统,针对云原生环境特别优化设计,在 Apache 2.0 开源协议下发布。使用 JuiceFS 存储数据,数据本身会被持久化在对象存储(例如,Amazon S3),而数据所对应的元数据可以根据场景需求被持久化在 Redis、MySQL、SQLite 等多种数据库引擎中。JuiceFS 可以简单便捷的将海量云存储直接接入已投入生产环境的大数据、机器学习、人工智能以及各种应用平台,无需修改代码即可像使用本地存储一样高效使用海量云端存储。
+
+📺 **视频**: [什么是 JuiceFS?](https://www.bilibili.com/video/BV1HK4y197va/)
+
+## 核心特性
+
+1. **POSIX 兼容**:像本地文件系统一样使用,无缝对接已有应用,无业务侵入性;
+2. **HDFS 兼容**:完整兼容 [HDFS API](docs/zh_cn/deployment/hadoop_java_sdk.md),提供更强的元数据性能;
+3. **S3 兼容**:提供 [S3 Gateway](docs/zh_cn/deployment/s3_gateway.md) 实现 S3 协议兼容的访问接口;
+4. **云原生**:通过 [Kubernetes CSI driver](docs/zh_cn/deployment/how_to_use_on_kubernetes.md) 可以很便捷地在 Kubernetes 中使用 JuiceFS;
+5. **多端共享**:同一文件系统可在上千台服务器同时挂载,高性能并发读写,共享数据;
+6. **强一致性**:确认的修改会在所有挂载了同一文件系统的服务器上立即可见,保证强一致性;
+7. **强悍性能**:毫秒级的延迟,近乎无限的吞吐量(取决于对象存储规模),查看[性能测试结果](docs/zh_cn/benchmark/benchmark.md);
+8. **数据安全**:支持传输中加密(encryption in transit)以及静态加密(encryption at rest),[查看详情](docs/zh_cn/security/encrypt.md);
+9. **文件锁**:支持 BSD 锁(flock)及 POSIX 锁(fcntl);
+10. **数据压缩**:支持使用 [LZ4](https://lz4.github.io/lz4) 或 [Zstandard](https://facebook.github.io/zstd) 压缩数据,节省存储空间;
+
+---
+
+[架构](#架构) | [开始使用](#开始使用) | [进阶主题](#进阶主题) | [POSIX 兼容性](#posix-兼容性测试) | [性能测试](#性能测试) | [支持的对象存储](#支持的对象存储) | [谁在使用](#谁在使用) | [产品路线图](#产品路线图) | [反馈问题](#反馈问题) | [贡献](#贡献) | [社区](#社区) | [使用量收集](#使用量收集) | [开源协议](#开源协议) | [致谢](#致谢) | [FAQ](#faq)
+
+---
+
+## 架构
+
+JuiceFS 由三个部分组成:
+
+1. **JuiceFS 客户端**:协调对象存储和元数据存储引擎,以及 POSIX、Hadoop、Kubernetes、S3 Gateway 等文件系统接口的实现;
+2. **数据存储**:存储数据本身,支持本地磁盘、对象存储;
+3. **元数据引擎**:存储数据对应的元数据,支持 Redis、MySQL、SQLite 等多种引擎;
+
+![JuiceFS Architecture](docs/zh_cn/images/juicefs-arch-new.png)
+
+JuiceFS 依靠 Redis 来存储文件的元数据。Redis 是基于内存的高性能的键值数据存储,非常适合存储元数据。与此同时,所有数据将通过 JuiceFS 客户端存储到对象存储中。[了解详情](docs/zh_cn/introduction/architecture.md)
+
+![JuiceFS Storage Format](docs/zh_cn/images/juicefs-storage-format-new.png)
+
+任何存入 JuiceFS 的文件都会被拆分成固定大小的 **"Chunk"**,默认的容量上限是 64 MiB。每个 Chunk 由一个或多个 **"Slice"** 组成,Slice 的长度不固定,取决于文件写入的方式。每个 Slice 又会被进一步拆分成固定大小的 **"Block"**,默认为 4 MiB。最后,这些 Block 会被存储到对象存储。与此同时,JuiceFS 会将每个文件以及它的 Chunks、Slices、Blocks 等元数据信息存储在元数据引擎中。[了解详情](docs/zh_cn/reference/how_juicefs_store_files.md)
+
+![How JuiceFS stores your files](docs/zh_cn/images/how-juicefs-stores-files-new.png)
+
+使用 JuiceFS,文件最终会被拆分成 Chunks、Slices 和 Blocks 存储在对象存储。因此,你会发现在对象存储平台的文件浏览器中找不到存入 JuiceFS 的源文件,存储桶中只有一个 chunks 目录和一堆数字编号的目录和文件。不要惊慌,这正是 JuiceFS 高性能运作的秘诀!
+
+## 开始使用
+
+创建 JuiceFS ,需要以下 3 个方面的准备:
+
+1. 准备 Redis 数据库
+2. 准备对象存储
+3. 下载安装 JuiceFS 客户端
+
+请参照 [快速上手指南](docs/zh_cn/getting-started/for_local.md) 立即开始使用 JuiceFS!
+
+### 命令索引
+
+请点击 [这里](docs/zh_cn/reference/command_reference.md) 查看所有子命令以及命令行参数。
+
+### Kubernetes
+
+在 Kubernetes 中使用 JuiceFS 非常便捷,请查看 [这个文档](docs/zh_cn/deployment/how_to_use_on_kubernetes.md) 了解更多信息。
+
+### Hadoop Java SDK
+
+JuiceFS 使用 [Hadoop Java SDK](docs/zh_cn/deployment/hadoop_java_sdk.md) 与 Hadoop 生态结合。
+
+## 进阶主题
+
+- [Redis 最佳实践](docs/zh_cn/administration/metadata/redis_best_practices.md)
+- [如何设置对象存储](docs/zh_cn/reference/how_to_setup_object_storage.md)
+- [缓存管理](docs/zh_cn/administration/cache_management.md)
+- [故障诊断和分析](docs/zh_cn/administration/fault_diagnosis_and_analysis.md)
+- [FUSE 挂载选项](docs/zh_cn/reference/fuse_mount_options.md)
+- [在 Windows 中使用 JuiceFS](docs/zh_cn/juicefs_on_windows.md)
+- [S3 网关](docs/zh_cn/deployment/s3_gateway.md)
+
+请查阅 [JuiceFS 用户手册](docs/zh_cn/README.md) 了解更多信息。
+
+## POSIX 兼容性测试
+
+JuiceFS 通过了 [pjdfstest](https://github.com/pjd/pjdfstest) 最新版所有 8813 项兼容性测试。
+
+```
+All tests successful.
+
+Test Summary Report
+-------------------
+/root/soft/pjdfstest/tests/chown/00.t (Wstat: 0 Tests: 1323 Failed: 0)
+ TODO passed: 693, 697, 708-709, 714-715, 729, 733
+Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr 0.38 sys + 2.57 cusr 3.93 csys = 9.65 CPU)
+Result: PASS
+```
+
+除了 pjdfstests 覆盖的那些 POSIX 特性外,JuiceFS 还支持:
+
+- 关闭再打开(close-to-open)一致性。一旦一个文件写入完成并关闭,之后的打开和读操作保证可以访问之前写入的数据。如果是在同一个挂载点,所有写入的数据都可以立即读。
+- 重命名以及所有其他元数据操作都是原子的,由 Redis 的事务机制保证。
+- 当文件被删除后,同一个挂载点上如果已经打开了,文件还可以继续访问。
+- 支持 mmap
+- 支持 fallocate 以及空洞
+- 支持扩展属性
+- 支持 BSD 锁(flock)
+- 支持 POSIX 记录锁(fcntl)
+
+## 性能测试
+
+### 基础性能测试
+
+JuiceFS 提供一个性能测试的子命令来帮助你了解它在你的环境中的性能表现:
+
+![JuiceFS Bench](docs/zh_cn/images/juicefs-bench.png)
+
+### 顺序读写性能
+
+使用 [fio](https://github.com/axboe/fio) 测试了 JuiceFS、[EFS](https://aws.amazon.com/efs) 和 [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) 的顺序读写性能,结果如下:
+
+![Sequential Read Write Benchmark](docs/zh_cn/images/sequential-read-write-benchmark.svg)
+
+上图显示 JuiceFS 可以比其他两者提供 10 倍以上的吞吐,详细结果请看[这里](docs/zh_cn/benchmark/fio.md)。
+
+### 元数据性能
+
+使用 [mdtest](https://github.com/hpc/ior) 测试了 JuiceFS、[EFS](https://aws.amazon.com/efs) 和 [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) 的元数据性能,结果如下:
+
+![Metadata Benchmark](docs/zh_cn/images/metadata-benchmark.svg)
+
+上图显示 JuiceFS 的元数据性能显著优于其他两个,详细的测试报告请看[这里](docs/zh_cn/benchmark/mdtest.md)。
+
+### 性能分析
+
+在文件系统的根目录有一个叫做 `.accesslog` 的虚拟文件,它提供了所有文件系统操作的细节,以及所消耗的时间,比如:
+
+```bash
+$ cat /jfs/.accesslog
+2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>
+2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>
+2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006>
+```
+
+每一行的最后一个数字是该操作所消耗的时间,单位是秒。你可以直接利用它来分析各种性能问题,或者尝试 `./juicefs profile /jfs` 命令实时监控统计信息。欲进一步了解此子命令请运行 `./juicefs profile -h` 或参阅[这里](docs/zh_cn/benchmark/operations_profiling.md)。
+
+## 支持的对象存储
+
+- 亚马逊 S3
+- 谷歌云存储
+- 微软云存储
+- 阿里云 OSS
+- 腾讯云 COS
+- 青云 QingStor 对象存储
+- Ceph RGW
+- MinIO
+- 本地目录
+- Redis
+
+JuiceFS 支持几乎所有主流的对象存储服务,[查看详情](docs/zh_cn/reference/how_to_setup_object_storage.md)。
+
+## 谁在使用
+
+JuiceFS 目前是 beta 状态,核心的存储格式还没有完全确定,如果要使用在生产环境中,请先进行细致认真的评估。如果你对它有兴趣,请尽早测试,并给我们[反馈](https://github.com/juicedata/juicefs/discussions)。
+
+欢迎你在使用 JuiceFS 后告诉我们,向大家分享你的使用经验。我们也收集汇总了一份名单在 [ADOPTERS_CN.md](ADOPTERS_CN.md) 中,也包括了其他开源项目与 JuiceFS 搭配使用的情况。
+
+## 产品路线图
+
+- 稳定存储格式
+- 支持使用 FoundationDB 做元数据引擎
+- 基于用户和组的配额
+- 基于目录的配额
+- 快照
+- 一次写入多次读取(WORM)
+
+## 反馈问题
+
+我们使用 [GitHub Issues](https://github.com/juicedata/juicefs/issues) 来管理社区反馈的问题,你也可以通过其他[渠道](#社区)跟社区联系。
+
+## 贡献
+
+感谢你的兴趣,请参考 [CONTRIBUTING.md](CONTRIBUTING.md)。
+
+## 社区
+
+欢迎加入 [Discussions](https://github.com/juicedata/juicefs/discussions) 和 [Slack 频道](https://join.slack.com/t/juicefs/shared_invite/zt-n9h5qdxh-0bJojPaql8cfFgwerDQJgA) 跟我们的团队和其他社区成员交流。
+
+## 使用量收集
+
+JuiceFS 的客户端会收集 **匿名** 使用数据来帮助我们更好地了解大家如何使用它,它只上报诸如版本号等使用量数据,不包含任何用户信息,完整的代码在 [这里](pkg/usage/usage.go)。
+
+你也可以通过下面的方式禁用它:
+
+```bash
+$ ./juicefs mount --no-usage-report
+```
+
+## 开源协议
+
+使用 Apache License 2.0 开源,详见 [LICENSE](LICENSE)。
+
+## 致谢
+
+JuiceFS 的设计参考了 [Google File System](https://research.google/pubs/pub51)、[HDFS](https://hadoop.apache.org) 以及 [MooseFS](https://moosefs.com),感谢他们的杰出工作。
+
+## FAQ
+
+### 为什么不支持某个对象存储?
+
+已经支持了绝大部分对象存储,参考这个[列表](docs/zh_cn/reference/how_to_setup_object_storage.md#支持的存储服务)。如果它跟 S3 兼容的话,也可以当成 S3 来使用。否则,请创建一个 issue 来增加支持。
+
+### 是否可以使用 Redis 集群版?
+
+不可以。JuiceFS 使用了 Redis 的[事务功能](https://redis.io/topics/transactions)来保证元数据操作的原子性,而分布式版还不支持分布式事务。哨兵节点或者其他的 Redis 高可用方法是需要的。
+
+请查看[「Redis 最佳实践」](docs/zh_cn/administration/metadata/redis_best_practices.md)了解更多信息。
+
+### JuiceFS 与 XXX 的区别是什么?
+
+请查看[「同类技术对比」](docs/zh_cn/comparison)文档了解更多信息。
+
+更多 FAQ 请查看[完整列表](docs/zh_cn/faq.md)。
diff --git a/check-changed.sh b/check-changed.sh
new file mode 100644
index 0000000..c546cb5
--- /dev/null
+++ b/check-changed.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+
+set -e
+
+if [ x"${TRAVIS_COMMIT_RANGE}" == x ] ; then
+ CHANGED_FILES=`git diff --name-only HEAD~1`
+else
+ CHANGED_FILES=`git diff --name-only $TRAVIS_COMMIT_RANGE`
+fi
+echo $CHANGED_FILES
+DOCS_DIR="docs/"
+SKIP_TEST=true
+
+for CHANGED_FILE in $CHANGED_FILES; do
+ if ! [[ $CHANGED_FILE =~ $DOCS_DIR ]] ; then
+ SKIP_TEST=false
+ break
+ fi
+done
\ No newline at end of file
diff --git a/cmd/bench.go b/cmd/bench.go
new file mode 100644
index 0000000..62e47b0
--- /dev/null
+++ b/cmd/bench.go
@@ -0,0 +1,452 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "math/rand"
+ "os"
+ "os/exec"
+ "path/filepath"
+ "runtime"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/mattn/go-isatty"
+ "github.com/urfave/cli/v2"
+)
+
+var resultRange = map[string][4]float64{
+ "bigwr": {100, 200, 10, 50},
+ "bigrd": {100, 200, 10, 50},
+ "smallwr": {12.5, 20, 50, 80},
+ "smallrd": {50, 100, 10, 20},
+ "stat": {20, 1000, 1, 5},
+ "fuse": {0, 0, 0.5, 2},
+ "meta": {0, 0, 2, 5},
+ "put": {0, 0, 100, 200},
+ "get": {0, 0, 100, 200},
+ "delete": {0, 0, 30, 100},
+ "cachewr": {0, 0, 10, 20},
+ "cacherd": {0, 0, 1, 5},
+}
+
+type benchCase struct {
+ bm *benchmark
+ name string
+ fsize, bsize int // file/block size in Bytes
+ fcount, bcount int // file/block count
+ wbar, rbar, sbar *utils.Bar // progress bar for write/read/stat
+}
+
+type benchmark struct {
+ tty bool
+ big, small *benchCase
+ threads int
+ tmpdir string
+}
+
+func (bc *benchCase) writeFiles(index int) {
+ for i := 0; i < bc.fcount; i++ {
+ fname := fmt.Sprintf("%s/%s.%d.%d", bc.bm.tmpdir, bc.name, index, i)
+ fp, err := os.OpenFile(fname, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
+ if err != nil {
+ logger.Fatalf("Failed to open file %s: %s", fname, err)
+ }
+ buf := make([]byte, bc.bsize)
+ _, _ = rand.Read(buf)
+ for j := 0; j < bc.bcount; j++ {
+ if _, err = fp.Write(buf); err != nil {
+ logger.Fatalf("Failed to write file %s: %s", fname, err)
+ }
+ bc.wbar.Increment()
+ }
+ _ = fp.Close()
+ }
+}
+
+func (bc *benchCase) readFiles(index int) {
+ for i := 0; i < bc.fcount; i++ {
+ fname := fmt.Sprintf("%s/%s.%d.%d", bc.bm.tmpdir, bc.name, index, i)
+ fp, err := os.Open(fname)
+ if err != nil {
+ logger.Fatalf("Failed to open file %s: %s", fname, err)
+ }
+ buf := make([]byte, bc.bsize)
+ for j := 0; j < bc.bcount; j++ {
+ if n, err := fp.Read(buf); err != nil || n != bc.bsize {
+ logger.Fatalf("Failed to read file %s: %d %s", fname, n, err)
+ }
+ bc.rbar.Increment()
+ }
+ _ = fp.Close()
+ }
+}
+
+func (bc *benchCase) statFiles(index int) {
+ for i := 0; i < bc.fcount; i++ {
+ fname := fmt.Sprintf("%s/%s.%d.%d", bc.bm.tmpdir, bc.name, index, i)
+ if _, err := os.Stat(fname); err != nil {
+ logger.Fatalf("Failed to stat file %s: %s", fname, err)
+ }
+ bc.sbar.Increment()
+ }
+}
+
+func (bc *benchCase) run(test string) float64 {
+ var fn func(int)
+ switch test {
+ case "write":
+ fn = bc.writeFiles
+ case "read":
+ fn = bc.readFiles
+ case "stat":
+ fn = bc.statFiles
+ } // default: fatal
+ var wg sync.WaitGroup
+ start := time.Now()
+ for i := 0; i < bc.bm.threads; i++ {
+ index := i
+ wg.Add(1)
+ go func() {
+ fn(index)
+ wg.Done()
+ }()
+ }
+ wg.Wait()
+ return time.Since(start).Seconds()
+}
+
+// blockSize, bigSize in MiB; smallSize in KiB
+func newBenchmark(tmpdir string, blockSize, bigSize, smallSize, smallCount, threads int) *benchmark {
+ bm := &benchmark{threads: threads, tmpdir: tmpdir}
+ if bigSize > 0 {
+ bm.big = bm.newCase("bigfile", bigSize<<20, 1, blockSize<<20)
+ }
+ if smallSize > 0 && smallCount > 0 {
+ bm.small = bm.newCase("smallfile", smallSize<<10, smallCount, blockSize<<20)
+ }
+ return bm
+}
+
+func (bm *benchmark) newCase(name string, fsize, fcount, bsize int) *benchCase {
+ bc := &benchCase{
+ bm: bm,
+ name: name,
+ fsize: fsize,
+ fcount: fcount,
+ bsize: bsize,
+ }
+ if fsize <= bsize {
+ bc.bcount = 1
+ bc.bsize = fsize
+ } else {
+ bc.bcount = (fsize-1)/bsize + 1
+ bc.fsize = bc.bcount * bsize
+ }
+ return bc
+}
+
+func (bm *benchmark) colorize(item string, value, cost float64, prec int) (string, string) {
+ svalue := strconv.FormatFloat(value, 'f', prec, 64)
+ scost := strconv.FormatFloat(cost, 'f', 2, 64)
+ if bm.tty {
+ r, ok := resultRange[item]
+ if !ok {
+ logger.Fatalf("Invalid item: %s", item)
+ }
+ if item == "smallwr" || item == "smallrd" || item == "stat" {
+ r[0] *= float64(bm.threads)
+ r[1] *= float64(bm.threads)
+ }
+ var color int
+ if value > r[1] { // max
+ color = GREEN
+ } else if value > r[0] { // min
+ color = YELLOW
+ } else {
+ color = RED
+ }
+ svalue = fmt.Sprintf("%s%dm%s%s", COLOR_SEQ, color, svalue, RESET_SEQ)
+ if cost < r[2] { // min
+ color = GREEN
+ } else if cost < r[3] { // max
+ color = YELLOW
+ } else {
+ color = RED
+ }
+ scost = fmt.Sprintf("%s%dm%s%s", COLOR_SEQ, color, scost, RESET_SEQ)
+ }
+ return svalue, scost
+}
+
+func (bm *benchmark) printResult(result [][3]string) {
+ var rawmax, max [3]int
+ for _, l := range result {
+ for i := 0; i < 3; i++ {
+ if len(l[i]) > rawmax[i] {
+ rawmax[i] = len(l[i])
+ }
+ }
+ }
+ max = rawmax
+ if bm.tty {
+ max[1] -= 11 // no color chars
+ max[2] -= 11
+ }
+
+ var b strings.Builder
+ for i := 0; i < 3; i++ {
+ b.WriteByte('+')
+ b.WriteString(strings.Repeat("-", max[i]+2))
+ }
+ b.WriteByte('+')
+ divider := b.String()
+ fmt.Println(divider)
+
+ b.Reset()
+ header := []string{"ITEM", "VALUE", "COST"}
+ for i := 0; i < 3; i++ {
+ b.WriteString(" | ")
+ b.WriteString(padding(header[i], max[i], ' '))
+ }
+ b.WriteString(" |")
+ fmt.Println(b.String()[1:])
+ fmt.Println(divider)
+
+ for _, l := range result {
+ b.Reset()
+ for i := 0; i < 3; i++ {
+ b.WriteString(" | ")
+ if spaces := rawmax[i] - len(l[i]); spaces > 0 {
+ b.WriteString(strings.Repeat(" ", spaces))
+ }
+ b.WriteString(l[i])
+ }
+ b.WriteString(" |")
+ fmt.Println(b.String()[1:])
+ }
+ fmt.Println(divider)
+}
+
+func bench(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+
+ /* --- Pre-check --- */
+ if ctx.Uint("block-size") == 0 || ctx.Uint("threads") == 0 {
+ return os.ErrInvalid
+ }
+ if ctx.NArg() < 1 {
+ logger.Fatalln("PATH must be provided")
+ }
+ tmpdir, err := filepath.Abs(ctx.Args().First())
+ if err != nil {
+ logger.Fatalf("Failed to get absolute path of %s: %s", ctx.Args().First(), err)
+ }
+ tmpdir = filepath.Join(tmpdir, fmt.Sprintf("__juicefs_benchmark_%d__", time.Now().UnixNano()))
+ bm := newBenchmark(tmpdir, int(ctx.Uint("block-size")), int(ctx.Uint("big-file-size")),
+ int(ctx.Uint("small-file-size")), int(ctx.Uint("small-file-count")), int(ctx.Uint("threads")))
+ if bm.big == nil && bm.small == nil {
+ return os.ErrInvalid
+ }
+ var purgeArgs []string
+ if os.Getuid() != 0 {
+ purgeArgs = append(purgeArgs, "sudo")
+ }
+ switch runtime.GOOS {
+ case "darwin":
+ purgeArgs = append(purgeArgs, "purge")
+ case "linux":
+ purgeArgs = append(purgeArgs, "/bin/sh", "-c", "echo 3 > /proc/sys/vm/drop_caches")
+ default:
+ logger.Fatal("Currently only support Linux/macOS")
+ }
+
+ /* --- Prepare --- */
+ if _, err := os.Stat(bm.tmpdir); os.IsNotExist(err) {
+ if err = os.MkdirAll(bm.tmpdir, 0755); err != nil {
+ logger.Fatalf("Failed to create %s: %s", bm.tmpdir, err)
+ }
+ }
+ var statsPath string
+ for mp := filepath.Dir(bm.tmpdir); mp != "/"; mp = filepath.Dir(mp) {
+ if _, err := os.Stat(filepath.Join(mp, ".stats")); err == nil {
+ statsPath = filepath.Join(mp, ".stats")
+ break
+ }
+ }
+ dropCaches := func() {
+ if os.Getenv("SKIP_DROP_CACHES") != "true" {
+ if err := exec.Command(purgeArgs[0], purgeArgs[1:]...).Run(); err != nil {
+ logger.Warnf("Failed to clean kernel caches: %s", err)
+ }
+ } else {
+ logger.Warnf("Clear cache operation has been skipped")
+ }
+ }
+ if os.Getuid() != 0 {
+ fmt.Println("Cleaning kernel cache, may ask for root privilege...")
+ }
+ dropCaches()
+ bm.tty = isatty.IsTerminal(os.Stdout.Fd())
+ progress := utils.NewProgress(!bm.tty, false)
+ if b := bm.big; b != nil {
+ total := int64(bm.threads * b.fcount * b.bcount)
+ b.wbar = progress.AddCountBar("Write big", total)
+ b.rbar = progress.AddCountBar("Read big", total)
+ }
+ if s := bm.small; s != nil {
+ total := int64(bm.threads * s.fcount * s.bcount)
+ s.wbar = progress.AddCountBar("Write small", total)
+ s.rbar = progress.AddCountBar("Read small", total)
+ s.sbar = progress.AddCountBar("Stat file", int64(bm.threads*s.fcount))
+ }
+
+ /* --- Run Benchmark --- */
+ var stats map[string]float64
+ if statsPath != "" {
+ stats = readStats(statsPath)
+ }
+ var result [][3]string
+ if b := bm.big; b != nil {
+ cost := b.run("write")
+ line := [3]string{"Write big file"}
+ line[1], line[2] = bm.colorize("bigwr", float64((b.fsize>>20)*b.fcount*bm.threads)/cost, cost/float64(b.fcount), 2)
+ line[1] += " MiB/s"
+ line[2] += " s/file"
+ result = append(result, line)
+ dropCaches()
+
+ cost = b.run("read")
+ line[0] = "Read big file"
+ line[1], line[2] = bm.colorize("bigrd", float64((b.fsize>>20)*b.fcount*bm.threads)/cost, cost/float64(b.fcount), 2)
+ line[1] += " MiB/s"
+ line[2] += " s/file"
+ result = append(result, line)
+ }
+ if s := bm.small; s != nil {
+ cost := s.run("write")
+ line := [3]string{"Write small file"}
+ line[1], line[2] = bm.colorize("smallwr", float64(s.fcount*bm.threads)/cost, cost*1000/float64(s.fcount), 1)
+ line[1] += " files/s"
+ line[2] += " ms/file"
+ result = append(result, line)
+ dropCaches()
+
+ cost = s.run("read")
+ line[0] = "Read small file"
+ line[1], line[2] = bm.colorize("smallrd", float64(s.fcount*bm.threads)/cost, cost*1000/float64(s.fcount), 1)
+ line[1] += " files/s"
+ line[2] += " ms/file"
+ result = append(result, line)
+ dropCaches()
+
+ cost = s.run("stat")
+ line[0] = "Stat file"
+ line[1], line[2] = bm.colorize("stat", float64(s.fcount*bm.threads)/cost, cost*1000/float64(s.fcount), 1)
+ line[1] += " files/s"
+ line[2] += " ms/file"
+ result = append(result, line)
+ }
+ progress.Done()
+
+ /* --- Clean-up --- */
+ if err := exec.Command("rm", "-rf", bm.tmpdir).Run(); err != nil {
+ logger.Warnf("Failed to cleanup %s: %s", bm.tmpdir, err)
+ }
+
+ /* --- Report --- */
+ fmt.Println("Benchmark finished!")
+ fmt.Printf("BlockSize: %d MiB, BigFileSize: %d MiB, SmallFileSize: %d KiB, SmallFileCount: %d, NumThreads: %d\n",
+ ctx.Uint("block-size"), ctx.Uint("big-file-size"), ctx.Uint("small-file-size"), ctx.Uint("small-file-count"), ctx.Uint("threads"))
+ if stats != nil {
+ stats2 := readStats(statsPath)
+ diff := func(item string) float64 {
+ return stats2["juicefs_"+item] - stats["juicefs_"+item]
+ }
+ show := func(title, nick, item string) {
+ count := diff(item + "_total")
+ var cost float64
+ if count > 0 {
+ cost = diff(item+"_sum") * 1000 / count
+ }
+ line := [3]string{title}
+ line[1], line[2] = bm.colorize(nick, count, cost, 0)
+ line[1] += " operations"
+ line[2] += " ms/op"
+ result = append(result, line)
+ }
+ show("FUSE operation", "fuse", "fuse_ops_durations_histogram_seconds")
+ show("Update meta", "meta", "transaction_durations_histogram_seconds")
+ show("Put object", "put", "object_request_durations_histogram_seconds_PUT")
+ show("Get object", "get", "object_request_durations_histogram_seconds_GET")
+ show("Delete object", "delete", "object_request_durations_histogram_seconds_DELETE")
+ show("Write into cache", "cachewr", "blockcache_write_hist_seconds")
+ show("Read from cache", "cacherd", "blockcache_read_hist_seconds")
+ var fmtString string
+ if bm.tty {
+ greenSeq := fmt.Sprintf("%s%dm", COLOR_SEQ, GREEN)
+ fmtString = fmt.Sprintf("Time used: %s%%.1f%s s, CPU: %s%%.1f%s%%%%, Memory: %s%%.1f%s MiB\n",
+ greenSeq, RESET_SEQ, greenSeq, RESET_SEQ, greenSeq, RESET_SEQ)
+ } else {
+ fmtString = "Time used: %.1f s, CPU: %.1f%%, Memory: %.1f MiB\n"
+ }
+ fmt.Printf(fmtString, diff("uptime"), diff("cpu_usage")*100/diff("uptime"), stats2["juicefs_memory"]/1024/1024)
+ }
+ bm.printResult(result)
+ return nil
+}
+
+func benchFlags() *cli.Command {
+ return &cli.Command{
+ Name: "bench",
+ Usage: "run benchmark to read/write/stat big/small files",
+ Action: bench,
+ ArgsUsage: "PATH",
+ Flags: []cli.Flag{
+ &cli.UintFlag{
+ Name: "block-size",
+ Value: 1,
+ Usage: "block size in MiB",
+ },
+ &cli.UintFlag{
+ Name: "big-file-size",
+ Value: 1024,
+ Usage: "size of big file in MiB",
+ },
+ &cli.UintFlag{
+ Name: "small-file-size",
+ Value: 128,
+ Usage: "size of small file in KiB",
+ },
+ &cli.UintFlag{
+ Name: "small-file-count",
+ Value: 100,
+ Usage: "number of small files",
+ },
+ &cli.UintFlag{
+ Name: "threads",
+ Aliases: []string{"p"},
+ Value: 1,
+ Usage: "number of concurrent threads",
+ },
+ },
+ }
+}
diff --git a/cmd/bench_test.go b/cmd/bench_test.go
new file mode 100644
index 0000000..4d76144
--- /dev/null
+++ b/cmd/bench_test.go
@@ -0,0 +1,44 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "os"
+ "testing"
+)
+
+func TestBench(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func(mountpoint string) {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }(mountpoint)
+ benchArgs := []string{"", "bench", mountpoint}
+ os.Setenv("SKIP_DROP_CACHES", "true")
+ defer os.Unsetenv("SKIP_DROP_CACHES")
+ err := Main(benchArgs)
+ if err != nil {
+ t.Fatalf("test bench failed: %v", err)
+ }
+}
diff --git a/cmd/config.go b/cmd/config.go
new file mode 100644
index 0000000..8e83c67
--- /dev/null
+++ b/cmd/config.go
@@ -0,0 +1,187 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "bufio"
+ "fmt"
+ "os"
+ "strings"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/urfave/cli/v2"
+)
+
+func warn(format string, a ...interface{}) {
+ fmt.Printf("\033[1;33mWARNING\033[0m: "+format+"\n", a...)
+}
+
+func userConfirmed() bool {
+ fmt.Print("Proceed anyway? [y/N]: ")
+ scanner := bufio.NewScanner(os.Stdin)
+ for scanner.Scan() {
+ if text := strings.ToLower(scanner.Text()); text == "y" || text == "yes" {
+ return true
+ } else if text == "" || text == "n" || text == "no" {
+ return false
+ } else {
+ fmt.Print("Please input y(yes) or n(no): ")
+ }
+ }
+ return false
+}
+
+func config(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ return fmt.Errorf("META-URL is needed")
+ }
+ removePassword(ctx.Args().Get(0))
+ m := meta.NewClient(ctx.Args().Get(0), &meta.Config{Retries: 10, Strict: true})
+
+ format, err := m.Load()
+ if err != nil {
+ return err
+ }
+ if len(ctx.LocalFlagNames()) == 0 {
+ format.RemoveSecret()
+ printJson(format)
+ return nil
+ }
+
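+ // Compare each given flag with the stored format, build a human-readable summary of the changes, and note which categories (quota, storage, trash) need a sanity check.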
+ var quota, storage, trash bool
+ var msg strings.Builder
+ for _, flag := range ctx.LocalFlagNames() {
+ switch flag {
+ case "capacity":
+ if new := ctx.Uint64(flag); new != format.Capacity>>30 {
+ msg.WriteString(fmt.Sprintf("%10s: %d GiB -> %d GiB\n", flag, format.Capacity>>30, new))
+ format.Capacity = new << 30
+ quota = true
+ }
+ case "inodes":
+ if new := ctx.Uint64(flag); new != format.Inodes {
+ msg.WriteString(fmt.Sprintf("%10s: %d -> %d\n", flag, format.Inodes, new))
+ format.Inodes = new
+ quota = true
+ }
+ case "bucket":
+ if new := ctx.String(flag); new != format.Bucket {
+ msg.WriteString(fmt.Sprintf("%10s: %s -> %s\n", flag, format.Bucket, new))
+ format.Bucket = new
+ storage = true
+ }
+ case "access-key":
+ if new := ctx.String(flag); new != format.AccessKey {
+ msg.WriteString(fmt.Sprintf("%10s: %s -> %s\n", flag, format.AccessKey, new))
+ format.AccessKey = new
+ storage = true
+ }
+ case "secret-key":
+ if new := ctx.String(flag); new != format.SecretKey {
+ msg.WriteString(fmt.Sprintf("%10s: updated\n", flag))
+ format.SecretKey = new
+ storage = true
+ }
+ case "trash-days":
+ if new := ctx.Int(flag); new != format.TrashDays {
+ msg.WriteString(fmt.Sprintf("%10s: %d -> %d\n", flag, format.TrashDays, new))
+ format.TrashDays = new
+ trash = true
+ }
+ }
+ }
+ if msg.Len() == 0 {
+ fmt.Println("Nothing changed.")
+ return nil
+ }
+
+ if !ctx.Bool("force") {
+ if storage {
+ blob, err := createStorage(format)
+ if err != nil {
+ return err
+ }
+ if err = test(blob); err != nil {
+ return err
+ }
+ }
+ if quota {
+ var totalSpace, availSpace, iused, iavail uint64
+ _ = m.StatFS(meta.Background, &totalSpace, &availSpace, &iused, &iavail)
+ usedSpace := totalSpace - availSpace
+ if format.Capacity > 0 && usedSpace >= format.Capacity ||
+ format.Inodes > 0 && iused >= format.Inodes {
+ warn("New quota is too small (used / quota): %d / %d bytes, %d / %d inodes.",
+ usedSpace, format.Capacity, iused, format.Inodes)
+ if !userConfirmed() {
+ return fmt.Errorf("Aborted.")
+ }
+ }
+ }
+ if trash && format.TrashDays == 0 {
+ warn("The current trash will be emptied and future removed files will purged immediately.")
+ if !userConfirmed() {
+ return fmt.Errorf("Aborted.")
+ }
+ }
+ }
+
+ if err = m.Init(*format, false); err == nil {
+ fmt.Println(msg.String()[:msg.Len()-1])
+ }
+ return err
+}
+
+func configFlags() *cli.Command {
+ return &cli.Command{
+ Name: "config",
+ Usage: "change config of a volume",
+ ArgsUsage: "META-URL",
+ Action: config,
+ Flags: []cli.Flag{
+ &cli.Uint64Flag{
+ Name: "capacity",
+ Usage: "the limit for space in GiB",
+ },
+ &cli.Uint64Flag{
+ Name: "inodes",
+ Usage: "the limit for number of inodes",
+ },
+ &cli.StringFlag{
+ Name: "bucket",
+ Usage: "A bucket URL to store data",
+ },
+ &cli.StringFlag{
+ Name: "access-key",
+ Usage: "Access key for object storage",
+ },
+ &cli.StringFlag{
+ Name: "secret-key",
+ Usage: "Secret key for object storage",
+ },
+ &cli.IntFlag{
+ Name: "trash-days",
+ Usage: "number of days after which removed files will be permanently deleted",
+ },
+ &cli.BoolFlag{
+ Name: "force",
+ Usage: "skip sanity check and force update the configurations",
+ },
+ },
+ }
+}
diff --git a/cmd/config_test.go b/cmd/config_test.go
new file mode 100644
index 0000000..606d390
--- /dev/null
+++ b/cmd/config_test.go
@@ -0,0 +1,82 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "encoding/json"
+ "os"
+ "testing"
+
+ "github.com/agiledragon/gomonkey/v2"
+ "github.com/juicedata/juicefs/pkg/meta"
+)
+
+func getStdout(args []string) ([]byte, error) {
+ tmp, err := os.CreateTemp("/tmp", "jfstest-*")
+ if err != nil {
+ return nil, err
+ }
+ defer tmp.Close()
+ defer os.Remove(tmp.Name())
+ patch := gomonkey.ApplyGlobalVar(os.Stdout, *tmp)
+ defer patch.Reset()
+
+ if err = Main(args); err != nil {
+ return nil, err
+ }
+ return os.ReadFile(tmp.Name())
+}
+
+func TestConfig(t *testing.T) {
+ metaUrl := "redis://localhost:6379/10"
+ ResetRedis(metaUrl)
+ if err := Main([]string{"", "format", metaUrl, "--bucket", "/tmp/testBucket", "test"}); err != nil {
+ t.Fatalf("format: %s", err)
+ }
+
+ if err := Main([]string{"", "config", metaUrl, "--trash-days", "2"}); err != nil {
+ t.Fatalf("config: %s", err)
+ }
+ data, err := getStdout([]string{"", "config", metaUrl})
+ if err != nil {
+ t.Fatalf("getStdout: %s", err)
+ }
+ var format meta.Format
+ if err = json.Unmarshal(data, &format); err != nil {
+ t.Fatalf("json unmarshal: %s", err)
+ }
+ if format.TrashDays != 2 {
+ t.Fatalf("trash-days %d != expect 2", format.TrashDays)
+ }
+
+ if err = Main([]string{"", "config", metaUrl, "--capacity", "10", "--inodes", "1000000"}); err != nil {
+ t.Fatalf("config: %s", err)
+ }
+ if err = Main([]string{"", "config", metaUrl, "--bucket", "/tmp/newBucket", "--access-key", "testAK", "--secret-key", "testSK"}); err != nil {
+ t.Fatalf("config: %s", err)
+ }
+ if data, err = getStdout([]string{"", "config", metaUrl}); err != nil {
+ t.Fatalf("getStdout: %s", err)
+ }
+ if err = json.Unmarshal(data, &format); err != nil {
+ t.Fatalf("json unmarshal: %s", err)
+ }
+ if format.Capacity != 10<<30 || format.Inodes != 1000000 ||
+ format.Bucket != "/tmp/newBucket" || format.AccessKey != "testAK" || format.SecretKey != "removed" {
+ t.Fatalf("unexpect format: %+v", format)
+ }
+}
diff --git a/cmd/destroy.go b/cmd/destroy.go
new file mode 100644
index 0000000..3be0789
--- /dev/null
+++ b/cmd/destroy.go
@@ -0,0 +1,147 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "sort"
+ "sync"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ osync "github.com/juicedata/juicefs/pkg/sync"
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/urfave/cli/v2"
+)
+
+func destroy(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 2 {
+ return fmt.Errorf("META-URL and UUID are required")
+ }
+ removePassword(ctx.Args().Get(0))
+ m := meta.NewClient(ctx.Args().Get(0), &meta.Config{Retries: 10, Strict: true})
+
+ format, err := m.Load()
+ if err != nil {
+ logger.Fatalf("load setting: %s", err)
+ }
+ if uuid := ctx.Args().Get(1); uuid != format.UUID {
+ logger.Fatalf("UUID %s != expected %s", uuid, format.UUID)
+ }
+
+ if !ctx.Bool("force") {
+ m.CleanStaleSessions()
+ sessions, err := m.ListSessions()
+ if err != nil {
+ logger.Fatalf("list sessions: %s", err)
+ }
+ if num := len(sessions); num > 0 {
+ logger.Fatalf("%d sessions are active, please disconnect them first", num)
+ }
+ var totalSpace, availSpace, iused, iavail uint64
+ _ = m.StatFS(meta.Background, &totalSpace, &availSpace, &iused, &iavail)
+
+ fmt.Printf(" volume name: %s\n", format.Name)
+ fmt.Printf(" volume UUID: %s\n", format.UUID)
+ fmt.Printf("data storage: %s://%s\n", format.Storage, format.Bucket)
+ fmt.Printf(" used bytes: %d\n", totalSpace-availSpace)
+ fmt.Printf(" used inodes: %d\n", iused)
+ warn("The target volume will be destoried permanently, including:")
+ warn("1. objects in the data storage")
+ warn("2. entries in the metadata engine")
+ if !userConfirmed() {
+ logger.Fatalln("Aborted.")
+ }
+ }
+
+ blob, err := createStorage(format)
+ if err != nil {
+ logger.Fatalf("create object storage: %s", err)
+ }
+ objs, err := osync.ListAll(blob, "", "")
+ if err != nil {
+ logger.Fatalf("list all objects: %s", err)
+ }
+ progress := utils.NewProgress(false, false)
+ spin := progress.AddCountSpinner("Deleted objects")
+ var failed int
+ var dirs []string
+ var mu sync.Mutex
+ var wg sync.WaitGroup
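+ // Delete regular objects with 8 concurrent workers; directories are collected and removed afterwards in reverse order so children are deleted before their parents.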
+ for i := 0; i < 8; i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for obj := range objs {
+ if obj == nil {
+ break // failed listing
+ }
+ if obj.IsDir() {
+ mu.Lock()
+ dirs = append(dirs, obj.Key())
+ mu.Unlock()
+ continue
+ }
+ if err := blob.Delete(obj.Key()); err == nil {
+ spin.Increment()
+ } else {
+ mu.Lock()
+ failed++
+ mu.Unlock()
+ logger.Warnf("delete %s: %s", obj.Key(), err)
+ }
+ }
+ }()
+ }
+ wg.Wait()
+ sort.Strings(dirs)
+ for i := len(dirs) - 1; i >= 0; i-- {
+ if err := blob.Delete(dirs[i]); err == nil {
+ spin.Increment()
+ } else {
+ failed++
+ logger.Warnf("delete %s: %s", dirs[i], err)
+ }
+ }
+ progress.Done()
+ if progress.Quiet {
+ logger.Infof("Deleted %d objects", spin.Current())
+ }
+ if failed > 0 {
+ logger.Errorf("%d objects are failed to delete, please do it manually.", failed)
+ }
+
+ if err = m.Reset(); err != nil {
+ logger.Fatalf("reset meta: %s", err)
+ }
+
+ logger.Infof("The volume has been destroyed! You may need to delete cache directory manually.")
+ return nil
+}
+
+func destroyFlags() *cli.Command {
+ return &cli.Command{
+ Name: "destroy",
+ Usage: "destroy an existing volume",
+ ArgsUsage: "META-URL UUID",
+ Action: destroy,
+ Flags: []cli.Flag{
+ &cli.BoolFlag{
+ Name: "force",
+ Usage: "skip sanity check and force destroy the volume",
+ },
+ },
+ }
+}
diff --git a/cmd/dump.go b/cmd/dump.go
new file mode 100644
index 0000000..87a9f74
--- /dev/null
+++ b/cmd/dump.go
@@ -0,0 +1,66 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io"
+ "os"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/urfave/cli/v2"
+)
+
+func dump(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ return fmt.Errorf("META-URL is needed")
+ }
+ var fp io.WriteCloser
+ if ctx.Args().Len() == 1 {
+ fp = os.Stdout
+ } else {
+ var err error
+ fp, err = os.OpenFile(ctx.Args().Get(1), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
+ if err != nil {
+ return err
+ }
+ defer fp.Close()
+ }
+ removePassword(ctx.Args().Get(0))
+ m := meta.NewClient(ctx.Args().Get(0), &meta.Config{Retries: 10, Strict: true, Subdir: ctx.String("subdir")})
+ if err := m.DumpMeta(fp, 0); err != nil {
+ return err
+ }
+ logger.Infof("Dump metadata into %s succeed", ctx.Args().Get(1))
+ return nil
+}
+
+func dumpFlags() *cli.Command {
+ return &cli.Command{
+ Name: "dump",
+ Usage: "dump metadata into a JSON file",
+ ArgsUsage: "META-URL [FILE]",
+ Action: dump,
+ Flags: []cli.Flag{
+ &cli.StringFlag{
+ Name: "subdir",
+ Usage: "only dump a sub-directory.",
+ },
+ },
+ }
+}
diff --git a/cmd/dump_test.go b/cmd/dump_test.go
new file mode 100644
index 0000000..12e19da
--- /dev/null
+++ b/cmd/dump_test.go
@@ -0,0 +1,71 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "context"
+ "os"
+ "testing"
+
+ "github.com/go-redis/redis/v8"
+)
+
+func TestDumpAndLoad(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ opt, err := redis.ParseURL(metaUrl)
+ if err != nil {
+ t.Fatalf("ParseURL: %v", err)
+ }
+ rdb := redis.NewClient(opt)
+ rdb.FlushDB(context.Background())
+
+ t.Run("Test Load", func(t *testing.T) {
+ loadArgs := []string{"", "load", metaUrl, "./../pkg/meta/metadata.sample"}
+ err = Main(loadArgs)
+ if err != nil {
+ t.Fatalf("load failed: %v", err)
+ }
+ if rdb.DBSize(context.Background()).Val() == 0 {
+ t.Fatalf("load error: %v", err)
+ }
+
+ })
+ t.Run("Test dump", func(t *testing.T) {
+ dumpArgs := []string{"", "dump", metaUrl, "/tmp/dump_test.json"}
+ err := Main(dumpArgs)
+ if err != nil {
+ t.Fatalf("dump error: %v", err)
+ }
+ _, err = os.Stat("/tmp/dump_test.json")
+ if err != nil {
+ t.Fatalf("dump error: %v", err)
+ }
+ })
+
+ t.Run("Test dump with subdir", func(t *testing.T) {
+ dumpArgs := []string{"", "dump", metaUrl, "/tmp/dump_subdir_test.json", "--subdir", "d1"}
+ err := Main(dumpArgs)
+ if err != nil {
+ t.Fatalf("dump error: %v", err)
+ }
+ _, err = os.Stat("/tmp/dump_subdir_test.json")
+ if err != nil {
+ t.Fatalf("dump error: %v", err)
+ }
+ })
+ rdb.FlushDB(context.Background())
+}
diff --git a/cmd/format.go b/cmd/format.go
new file mode 100644
index 0000000..26c9b01
--- /dev/null
+++ b/cmd/format.go
@@ -0,0 +1,318 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "io/ioutil"
+ "math/rand"
+ _ "net/http/pprof"
+ "os"
+ "path"
+ "regexp"
+ "runtime"
+ "strings"
+ "time"
+
+ "github.com/google/uuid"
+ "github.com/juicedata/juicefs/pkg/compress"
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/object"
+ "github.com/juicedata/juicefs/pkg/version"
+ "github.com/urfave/cli/v2"
+)
+
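+// fixObjectSize rounds the block size (in KiB) down to a power of two and clamps it to the range [64, 16384], i.e. 64 KiB to 16 MiB.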
+func fixObjectSize(s int) int {
+ const min, max = 64, 16 << 10
+ var bits uint
+ for s > 1 {
+ bits++
+ s >>= 1
+ }
+ s = s << bits
+ if s < min {
+ s = min
+ } else if s > max {
+ s = max
+ }
+ return s
+}
+
+func createStorage(format *meta.Format) (object.ObjectStorage, error) {
+ object.UserAgent = "JuiceFS-" + version.Version()
+ var blob object.ObjectStorage
+ var err error
+ if format.Shards > 1 {
+ blob, err = object.NewSharded(strings.ToLower(format.Storage), format.Bucket, format.AccessKey, format.SecretKey, format.Shards)
+ } else {
+ blob, err = object.CreateStorage(strings.ToLower(format.Storage), format.Bucket, format.AccessKey, format.SecretKey)
+ }
+ if err != nil {
+ return nil, err
+ }
+ blob = object.WithPrefix(blob, format.Name+"/")
+
+ if format.EncryptKey != "" {
+ passphrase := os.Getenv("JFS_RSA_PASSPHRASE")
+ privKey, err := object.ParseRsaPrivateKeyFromPem(format.EncryptKey, passphrase)
+ if err != nil {
+ return nil, fmt.Errorf("load private key: %s", err)
+ }
+ encryptor := object.NewAESEncryptor(object.NewRSAEncryptor(privKey))
+ blob = object.NewEncrypted(blob, encryptor)
+ }
+ return blob, nil
+}
+
+var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
+
+func randSeq(n int) string {
+ b := make([]rune, n)
+ for i := range b {
+ b[i] = letters[rand.Intn(len(letters))]
+ }
+ return string(b)
+}
+
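+// doTesting verifies the object storage by writing a probe object and reading it back; it tries to create the bucket if the first put fails, and only prints a warning when delete is not permitted.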
+func doTesting(store object.ObjectStorage, key string, data []byte) error {
+ if err := store.Put(key, bytes.NewReader(data)); err != nil {
+ if strings.Contains(err.Error(), "Access Denied") {
+ return fmt.Errorf("Failed to put: %s", err)
+ }
+ if err2 := store.Create(); err2 != nil {
+ if strings.Contains(err.Error(), "NoSuchBucket") {
+ return fmt.Errorf("Failed to create bucket %s: %s, previous error: %s\nPlease create bucket %s manually, then format again.",
+ store, err2, err, store)
+ } else {
+ return fmt.Errorf("Failed to create bucket %s: %s, previous error: %s",
+ store, err2, err)
+ }
+ }
+ if err := store.Put(key, bytes.NewReader(data)); err != nil {
+ return fmt.Errorf("Failed to put: %s", err)
+ }
+ }
+ p, err := store.Get(key, 0, -1)
+ if err != nil {
+ return fmt.Errorf("Failed to get: %s", err)
+ }
+ data2, err := ioutil.ReadAll(p)
+ _ = p.Close()
+ if err != nil {
+ return err
+ }
+ if !bytes.Equal(data, data2) {
+ return fmt.Errorf("Read wrong data")
+ }
+ err = store.Delete(key)
+ if err != nil {
+ // it's OK to not have delete permission
+ fmt.Printf("Failed to delete: %s\n", err)
+ }
+ return nil
+}
+
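+// test runs doTesting on a random key, retrying up to 3 times with increasing delays to tolerate transient errors.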
+func test(store object.ObjectStorage) error {
+ rand.Seed(time.Now().UnixNano())
+ key := "testing/" + randSeq(10)
+ data := make([]byte, 100)
+ _, _ = rand.Read(data)
+ nRetry := 3
+ var err error
+ for i := 0; i < nRetry; i++ {
+ err = doTesting(store, key, data)
+ if err == nil {
+ return nil
+ }
+ time.Sleep(time.Second * time.Duration(i*3+1))
+ }
+ return err
+}
+
+func format(c *cli.Context) error {
+ setLoggerLevel(c)
+ if c.Args().Len() < 1 {
+ logger.Fatalf("Meta URL and name are required")
+ }
+ removePassword(c.Args().Get(0))
+ m := meta.NewClient(c.Args().Get(0), &meta.Config{Retries: 2})
+
+ if c.Args().Len() < 2 {
+ logger.Fatalf("Please give it a name")
+ }
+ name := c.Args().Get(1)
+ validName := regexp.MustCompile(`^[a-z0-9][a-z0-9\-]{1,61}[a-z0-9]$`)
+ if !validName.MatchString(name) {
+ logger.Fatalf("invalid name: %s, only alphabet, number and - are allowed, and the length should be 3 to 63 characters.", name)
+ }
+
+ compressor := compress.NewCompressor(c.String("compress"))
+ if compressor == nil {
+ logger.Fatalf("Unsupported compress algorithm: %s", c.String("compress"))
+ }
+ if c.Bool("no-update") {
+ if _, err := m.Load(); err == nil {
+ return nil
+ }
+ }
+
+ format := meta.Format{
+ Name: name,
+ UUID: uuid.New().String(),
+ Storage: c.String("storage"),
+ Bucket: c.String("bucket"),
+ AccessKey: c.String("access-key"),
+ SecretKey: c.String("secret-key"),
+ Shards: c.Int("shards"),
+ Capacity: c.Uint64("capacity") << 30,
+ Inodes: c.Uint64("inodes"),
+ BlockSize: fixObjectSize(c.Int("block-size")),
+ Compression: c.String("compress"),
+ TrashDays: c.Int("trash-days"),
+ }
+ if format.AccessKey == "" && os.Getenv("ACCESS_KEY") != "" {
+ format.AccessKey = os.Getenv("ACCESS_KEY")
+ _ = os.Unsetenv("ACCESS_KEY")
+ }
+ if format.SecretKey == "" && os.Getenv("SECRET_KEY") != "" {
+ format.SecretKey = os.Getenv("SECRET_KEY")
+ _ = os.Unsetenv("SECRET_KEY")
+ }
+
+ if format.Storage == "file" && !strings.HasSuffix(format.Bucket, "/") {
+ format.Bucket += "/"
+ }
+
+ keyPath := c.String("encrypt-rsa-key")
+ if keyPath != "" {
+ pem, err := ioutil.ReadFile(keyPath)
+ if err != nil {
+ logger.Fatalf("load RSA key from %s: %s", keyPath, err)
+ }
+ format.EncryptKey = string(pem)
+ }
+
+ blob, err := createStorage(&format)
+ if err != nil {
+ logger.Fatalf("object storage: %s", err)
+ }
+ logger.Infof("Data use %s", blob)
+ if os.Getenv("JFS_NO_CHECK_OBJECT_STORAGE") == "" {
+ if err := test(blob); err != nil {
+ logger.Fatalf("Storage %s is not configured correctly: %s", blob, err)
+ }
+ }
+
+ if !c.Bool("force") && format.Compression == "none" { // default
+ if old, err := m.Load(); err == nil && old.Compression == "lz4" { // lz4 is the previous default algr
+ format.Compression = old.Compression // keep the existing default compress algr
+ }
+ }
+ err = m.Init(format, c.Bool("force"))
+ if err != nil {
+ logger.Fatalf("format: %s", err)
+ }
+ format.RemoveSecret()
+ logger.Infof("Volume is formatted as %+v", format)
+ return nil
+}
+
+func formatFlags() *cli.Command {
+ var defaultBucket string
+ switch runtime.GOOS {
+ case "darwin":
+ homeDir, err := os.UserHomeDir()
+ if err != nil {
+ logger.Fatalf("%v", err)
+ }
+ defaultBucket = path.Join(homeDir, ".juicefs", "local")
+ case "windows":
+ defaultBucket = path.Join("C:/jfs/local")
+ default:
+ defaultBucket = "/var/jfs"
+ }
+ return &cli.Command{
+ Name: "format",
+ Usage: "format a volume",
+ ArgsUsage: "META-URL NAME",
+ Flags: []cli.Flag{
+ &cli.IntFlag{
+ Name: "block-size",
+ Value: 4096,
+ Usage: "size of block in KiB",
+ },
+ &cli.Uint64Flag{
+ Name: "capacity",
+ Value: 0,
+ Usage: "the limit for space in GiB",
+ },
+ &cli.Uint64Flag{
+ Name: "inodes",
+ Value: 0,
+ Usage: "the limit for number of inodes",
+ },
+ &cli.StringFlag{
+ Name: "compress",
+ Value: "none",
+ Usage: "compression algorithm (lz4, zstd, none)",
+ },
+ &cli.IntFlag{
+ Name: "shards",
+ Value: 0,
+ Usage: "store the blocks into N buckets by hash of key",
+ },
+ &cli.StringFlag{
+ Name: "storage",
+ Value: "file",
+ Usage: "Object storage type (e.g. s3, gcs, oss, cos)",
+ },
+ &cli.StringFlag{
+ Name: "bucket",
+ Value: defaultBucket,
+ Usage: "A bucket URL to store data",
+ },
+ &cli.StringFlag{
+ Name: "access-key",
+ Usage: "Access key for object storage (env ACCESS_KEY)",
+ },
+ &cli.StringFlag{
+ Name: "secret-key",
+ Usage: "Secret key for object storage (env SECRET_KEY)",
+ },
+ &cli.StringFlag{
+ Name: "encrypt-rsa-key",
+ Usage: "A path to RSA private key (PEM)",
+ },
+ &cli.IntFlag{
+ Name: "trash-days",
+ Value: 1,
+ Usage: "number of days after which removed files will be permanently deleted",
+ },
+
+ &cli.BoolFlag{
+ Name: "force",
+ Usage: "overwrite existing format",
+ },
+ &cli.BoolFlag{
+ Name: "no-update",
+ Usage: "don't update existing volume",
+ },
+ },
+ Action: format,
+ }
+}
diff --git a/cmd/format_test.go b/cmd/format_test.go
new file mode 100644
index 0000000..c3159c4
--- /dev/null
+++ b/cmd/format_test.go
@@ -0,0 +1,94 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "context"
+ "encoding/json"
+ "testing"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+
+ "github.com/go-redis/redis/v8"
+)
+
+func TestFixObjectSize(t *testing.T) {
+ t.Run("Should make sure the size is in range", func(t *testing.T) {
+ cases := []struct {
+ input, expected int
+ }{
+ {30, 64},
+ {0, 64},
+ {2 << 30, 16 << 10},
+ {16 << 11, 16 << 10},
+ }
+ for _, c := range cases {
+ if size := fixObjectSize(c.input); size != c.expected {
+ t.Fatalf("Expected %d, got %d", c.expected, size)
+ }
+ }
+ })
+ t.Run("Should use powers of two", func(t *testing.T) {
+ cases := []struct {
+ input, expected int
+ }{
+ {150, 128},
+ {99, 64},
+ {1077, 1024},
+ }
+ for _, c := range cases {
+ if size := fixObjectSize(c.input); size != c.expected {
+ t.Fatalf("Expected %d, got %d", c.expected, size)
+ }
+ }
+ })
+}
+
+func TestFormat(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ opt, err := redis.ParseURL(metaUrl)
+ if err != nil {
+ t.Fatalf("ParseURL: %v", err)
+ }
+ rdb := redis.NewClient(opt)
+ ctx := context.Background()
+ rdb.FlushDB(ctx)
+ defer rdb.FlushDB(ctx)
+ name := "test"
+ formatArgs := []string{"", "format", "--storage", "file", "--bucket", "/tmp/testMountDir", metaUrl, name}
+ err = Main(formatArgs)
+ if err != nil {
+ t.Fatalf("format error: %v", err)
+ }
+ body, err := rdb.Get(ctx, "setting").Bytes()
+ if err != nil {
+ t.Fatalf("database is not formatted: %v", err)
+ }
+ f := meta.Format{}
+ err = json.Unmarshal(body, &f)
+ if err != nil {
+ t.Fatalf("database formatted error: %v", err)
+ }
+
+ if f.Name != name {
+ t.Fatalf("database formatted error: %v", err)
+ }
+
+}
diff --git a/cmd/fsck.go b/cmd/fsck.go
new file mode 100644
index 0000000..3216514
--- /dev/null
+++ b/cmd/fsck.go
@@ -0,0 +1,164 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "sort"
+ "strings"
+ "time"
+
+ "github.com/juicedata/juicefs/pkg/chunk"
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/object"
+ osync "github.com/juicedata/juicefs/pkg/sync"
+ "github.com/juicedata/juicefs/pkg/utils"
+
+ "github.com/urfave/cli/v2"
+)
+
+func checkFlags() *cli.Command {
+ return &cli.Command{
+ Name: "fsck",
+ Usage: "Check consistency of file system",
+ ArgsUsage: "META-URL",
+ Action: fsck,
+ }
+}
+
+func fsck(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ return fmt.Errorf("META-URL is needed")
+ }
+ removePassword(ctx.Args().Get(0))
+ m := meta.NewClient(ctx.Args().Get(0), &meta.Config{Retries: 10, Strict: true})
+ format, err := m.Load()
+ if err != nil {
+ logger.Fatalf("load setting: %s", err)
+ }
+
+ chunkConf := chunk.Config{
+ BlockSize: format.BlockSize * 1024,
+ Compress: format.Compression,
+
+ GetTimeout: time.Second * 60,
+ PutTimeout: time.Second * 60,
+ MaxUpload: 20,
+ BufferSize: 300 << 20,
+ CacheDir: "memory",
+ }
+
+ blob, err := createStorage(format)
+ if err != nil {
+ logger.Fatalf("object storage: %s", err)
+ }
+ logger.Infof("Data use %s", blob)
+ blob = object.WithPrefix(blob, "chunks/")
+ objs, err := osync.ListAll(blob, "", "")
+ if err != nil {
+ logger.Fatalf("list all blocks: %s", err)
+ }
+
+ // Find all blocks in object storage
+ progress := utils.NewProgress(false, false)
+ blockDSpin := progress.AddDoubleSpinner("Found blocks")
+ var blocks = make(map[string]int64)
+ for obj := range objs {
+ if obj == nil {
+ break // failed listing
+ }
+ if obj.IsDir() {
+ continue
+ }
+
+ logger.Debugf("found block %s", obj.Key())
+ parts := strings.Split(obj.Key(), "/")
+ if len(parts) != 3 {
+ continue
+ }
+ name := parts[2]
+ blocks[name] = obj.Size()
+ blockDSpin.IncrInt64(obj.Size())
+ }
+ blockDSpin.Done()
+ if progress.Quiet {
+ c, b := blockDSpin.Current()
+ logger.Infof("Found %d blocks (%d bytes)", c, b)
+ }
+
+ // List all slices in metadata engine
+ sliceCSpin := progress.AddCountSpinner("Listed slices")
+ var c = meta.NewContext(0, 0, []uint32{0})
+ slices := make(map[meta.Ino][]meta.Slice)
+ r := m.ListSlices(c, slices, false, sliceCSpin.Increment)
+ if r != 0 {
+ logger.Fatalf("list all slices: %s", r)
+ }
+ sliceCSpin.Done()
+
+ // Scan all slices to find lost blocks
+ sliceCBar := progress.AddCountBar("Scanned slices", sliceCSpin.Current())
+ sliceBSpin := progress.AddByteSpinner("Scanned slices")
+ lostDSpin := progress.AddDoubleSpinner("Lost blocks")
+ brokens := make(map[meta.Ino]string)
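+ // Each slice is stored as blocks named <chunkid>_<index>_<size>, each at most BlockSize bytes; a block missing from both the listing and a Head probe marks the owning file as broken.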
+ for inode, ss := range slices {
+ for _, s := range ss {
+ n := (s.Size - 1) / uint32(chunkConf.BlockSize)
+ for i := uint32(0); i <= n; i++ {
+ sz := chunkConf.BlockSize
+ if i == n {
+ sz = int(s.Size) - int(i)*chunkConf.BlockSize
+ }
+ key := fmt.Sprintf("%d_%d_%d", s.Chunkid, i, sz)
+ if _, ok := blocks[key]; !ok {
+ if _, err := blob.Head(key); err != nil {
+ if _, ok := brokens[inode]; !ok {
+ if p, st := meta.GetPath(m, meta.Background, inode); st == 0 {
+ brokens[inode] = p
+ } else {
+ logger.Warnf("getpath of inode %d: %s", inode, st)
+ brokens[inode] = st.Error()
+ }
+ }
+ logger.Errorf("can't find block %s for file %s: %s", key, brokens[inode], err)
+ lostDSpin.IncrInt64(int64(sz))
+ }
+ }
+ }
+ sliceCBar.Increment()
+ sliceBSpin.IncrInt64(int64(s.Size))
+ }
+ }
+ progress.Done()
+ if progress.Quiet {
+ logger.Infof("Used by %d slices (%d bytes)", sliceCBar.Current(), sliceBSpin.Current())
+ }
+ if lc, lb := lostDSpin.Current(); lc > 0 {
+ msg := fmt.Sprintf("%d objects are lost (%d bytes), %d broken files:\n", lc, lb, len(brokens))
+ msg += fmt.Sprintf("%13s: PATH\n", "INODE")
+ var fileList []string
+ for i, p := range brokens {
+ fileList = append(fileList, fmt.Sprintf("%13d: %s", i, p))
+ }
+ sort.Strings(fileList)
+ msg += strings.Join(fileList, "\n")
+ logger.Fatal(msg)
+ }
+
+ return nil
+}
diff --git a/cmd/fsck_test.go b/cmd/fsck_test.go
new file mode 100644
index 0000000..114aa58
--- /dev/null
+++ b/cmd/fsck_test.go
@@ -0,0 +1,51 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io/ioutil"
+ "testing"
+)
+
+func TestFsck(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func(mountpoint string) {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }(mountpoint)
+ for i := 0; i < 10; i++ {
+ filename := fmt.Sprintf("%s/f%d.txt", mountpoint, i)
+ err := ioutil.WriteFile(filename, []byte("test"), 0644)
+ if err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ }
+
+ fsckArgs := []string{"", "fsck", metaUrl}
+ err := Main(fsckArgs)
+ if err != nil {
+ t.Fatalf("fsck failed: %v", err)
+ }
+}
diff --git a/cmd/gateway.go b/cmd/gateway.go
new file mode 100644
index 0000000..df2d1fb
--- /dev/null
+++ b/cmd/gateway.go
@@ -0,0 +1,269 @@
+//go:build !nogateway
+// +build !nogateway
+
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package main
+
+import (
+ "path/filepath"
+
+ "github.com/juicedata/juicefs/pkg/metric"
+
+ _ "net/http/pprof"
+ "os"
+ "strings"
+ "time"
+
+ "github.com/juicedata/juicefs/pkg/chunk"
+ jfsgateway "github.com/juicedata/juicefs/pkg/gateway"
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/usage"
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/juicedata/juicefs/pkg/version"
+ "github.com/juicedata/juicefs/pkg/vfs"
+ "github.com/urfave/cli/v2"
+
+ mcli "github.com/minio/cli"
+ minio "github.com/minio/minio/cmd"
+ "github.com/minio/minio/pkg/auth"
+)
+
+func gatewayFlags() *cli.Command {
+ flags := append(clientFlags(),
+ &cli.Float64Flag{
+ Name: "attr-cache",
+ Value: 1.0,
+ Usage: "attributes cache timeout in seconds",
+ },
+ &cli.Float64Flag{
+ Name: "entry-cache",
+ Value: 0,
+ Usage: "file entry cache timeout in seconds",
+ },
+ &cli.Float64Flag{
+ Name: "dir-entry-cache",
+ Value: 1.0,
+ Usage: "dir entry cache timeout in seconds",
+ },
+ &cli.StringFlag{
+ Name: "access-log",
+ Usage: "path for JuiceFS access log",
+ },
+ &cli.StringFlag{
+ Name: "metrics",
+ Value: "127.0.0.1:9567",
+ Usage: "address to export metrics",
+ },
+ &cli.StringFlag{
+ Name: "consul",
+ Value: "127.0.0.1:8500",
+ Usage: "consul address to register",
+ },
+ &cli.BoolFlag{
+ Name: "no-usage-report",
+ Usage: "do not send usage report",
+ },
+ &cli.BoolFlag{
+ Name: "no-banner",
+ Usage: "disable MinIO startup information",
+ },
+ &cli.BoolFlag{
+ Name: "multi-buckets",
+ Usage: "use top level of directories as buckets",
+ },
+ &cli.BoolFlag{
+ Name: "keep-etag",
+ Usage: "keep the ETag for uploaded objects",
+ })
+ return &cli.Command{
+ Name: "gateway",
+ Usage: "S3-compatible gateway",
+ ArgsUsage: "META-URL ADDRESS",
+ Flags: flags,
+ Action: gateway,
+ }
+}
+
+func gateway(c *cli.Context) error {
+ setLoggerLevel(c)
+
+ if c.Args().Len() < 2 {
+ logger.Fatalf("Meta URL and listen address are required")
+ }
+
+ ak := os.Getenv("MINIO_ROOT_USER")
+ if ak == "" {
+ ak = os.Getenv("MINIO_ACCESS_KEY")
+ }
+ if len(ak) < 3 {
+ logger.Fatalf("MINIO_ROOT_USER should be specified as an environment variable with at least 3 characters")
+ }
+ sk := os.Getenv("MINIO_ROOT_PASSWORD")
+ if sk == "" {
+ sk = os.Getenv("MINIO_SECRET_KEY")
+ }
+ if len(sk) < 8 {
+ logger.Fatalf("MINIO_ROOT_PASSWORD should be specified as an environment variable with at least 8 characters")
+ }
+
+ address := c.Args().Get(1)
+ gw = &GateWay{c}
+
+ args := []string{"gateway", "--address", address, "--anonymous"}
+ if c.Bool("no-banner") {
+ args = append(args, "--quiet")
+ }
+ app := &mcli.App{
+ Action: gateway2,
+ Flags: []mcli.Flag{
+ mcli.StringFlag{
+ Name: "address",
+ Value: ":9000",
+ Usage: "bind to a specific ADDRESS:PORT, ADDRESS can be an IP or hostname",
+ },
+ mcli.BoolFlag{
+ Name: "anonymous",
+ Usage: "hide sensitive information from logging",
+ },
+ mcli.BoolFlag{
+ Name: "json",
+ Usage: "output server logs and startup information in json format",
+ },
+ mcli.BoolFlag{
+ Name: "quiet",
+ Usage: "disable MinIO startup information",
+ },
+ },
+ }
+ return app.Run(args)
+}
+
+var gw *GateWay
+
+func gateway2(ctx *mcli.Context) error {
+ minio.StartGateway(ctx, gw)
+ return nil
+}
+
+type GateWay struct {
+ ctx *cli.Context
+}
+
+func (g *GateWay) Name() string {
+ return "JuiceFS"
+}
+
+func (g *GateWay) Production() bool {
+ return true
+}
+
+func (g *GateWay) NewGatewayLayer(creds auth.Credentials) (minio.ObjectLayer, error) {
+ c := g.ctx
+ addr := c.Args().Get(0)
+ removePassword(addr)
+ m := meta.NewClient(addr, &meta.Config{
+ Retries: 10,
+ Strict: true,
+ ReadOnly: c.Bool("read-only"),
+ OpenCache: time.Duration(c.Float64("open-cache") * 1e9),
+ MountPoint: "s3gateway",
+ Subdir: c.String("subdir"),
+ MaxDeletes: c.Int("max-deletes"),
+ })
+ format, err := m.Load()
+ if err != nil {
+ logger.Fatalf("load setting: %s", err)
+ }
+ wrapRegister("s3gateway", format.Name)
+
+ chunkConf := chunk.Config{
+ BlockSize: format.BlockSize * 1024,
+ Compress: format.Compression,
+
+ GetTimeout: time.Second * time.Duration(c.Int("get-timeout")),
+ PutTimeout: time.Second * time.Duration(c.Int("put-timeout")),
+ MaxUpload: c.Int("max-uploads"),
+ Writeback: c.Bool("writeback"),
+ Prefetch: c.Int("prefetch"),
+ BufferSize: c.Int("buffer-size") << 20,
+ UploadLimit: c.Int64("upload-limit") * 1e6 / 8,
+ DownloadLimit: c.Int64("download-limit") * 1e6 / 8,
+
+ CacheDir: c.String("cache-dir"),
+ CacheSize: int64(c.Int("cache-size")),
+ FreeSpace: float32(c.Float64("free-space-ratio")),
+ CacheMode: os.FileMode(0600),
+ CacheFullBlock: !c.Bool("cache-partial-only"),
+ AutoCreate: true,
+ }
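+ // Put cached blocks into a per-volume sub-directory (named by UUID) under each cache directory so different volumes never share cache entries.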
+ if chunkConf.CacheDir != "memory" {
+ ds := utils.SplitDir(chunkConf.CacheDir)
+ for i := range ds {
+ ds[i] = filepath.Join(ds[i], format.UUID)
+ }
+ chunkConf.CacheDir = strings.Join(ds, string(os.PathListSeparator))
+ }
+ if c.IsSet("bucket") {
+ format.Bucket = c.String("bucket")
+ }
+ blob, err := createStorage(format)
+ if err != nil {
+ logger.Fatalf("object storage: %s", err)
+ }
+ logger.Infof("Data use %s", blob)
+
+ store := chunk.NewCachedStore(blob, chunkConf)
+ m.OnMsg(meta.DeleteChunk, func(args ...interface{}) error {
+ chunkid := args[0].(uint64)
+ length := args[1].(uint32)
+ return store.Remove(chunkid, int(length))
+ })
+ m.OnMsg(meta.CompactChunk, func(args ...interface{}) error {
+ slices := args[0].([]meta.Slice)
+ chunkid := args[1].(uint64)
+ return vfs.Compact(chunkConf, store, slices, chunkid)
+ })
+ err = m.NewSession()
+ if err != nil {
+ logger.Fatalf("new session: %s", err)
+ }
+
+ conf := &vfs.Config{
+ Meta: &meta.Config{
+ Retries: 10,
+ },
+ Format: format,
+ Version: version.Version(),
+ AttrTimeout: time.Millisecond * time.Duration(c.Float64("attr-cache")*1000),
+ EntryTimeout: time.Millisecond * time.Duration(c.Float64("entry-cache")*1000),
+ DirEntryTimeout: time.Millisecond * time.Duration(c.Float64("dir-entry-cache")*1000),
+ AccessLog: c.String("access-log"),
+ Chunk: &chunkConf,
+ }
+
+ metricsAddr := exposeMetrics(m, c)
+ if c.IsSet("consul") {
+ metric.RegisterToConsul(c.String("consul"), metricsAddr, "s3gateway")
+ }
+ if d := c.Duration("backup-meta"); d > 0 {
+ go vfs.Backup(m, blob, d)
+ }
+ if !c.Bool("no-usage-report") {
+ go usage.ReportUsage(m, "gateway "+version.Version())
+ }
+ return jfsgateway.NewJFSGateway(conf, m, store, c.Bool("multi-buckets"), c.Bool("keep-etag"))
+}
diff --git a/cmd/gateway_noop.go b/cmd/gateway_noop.go
new file mode 100644
index 0000000..da14666
--- /dev/null
+++ b/cmd/gateway_noop.go
@@ -0,0 +1,35 @@
+//go:build nogateway
+// +build nogateway
+
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package main
+
+import (
+ "errors"
+
+ "github.com/urfave/cli/v2"
+)
+
+func gatewayFlags() *cli.Command {
+ return &cli.Command{
+ Name: "gateway",
+ Usage: "S3-compatible gateway (not included)",
+ Action: func(*cli.Context) error {
+ return errors.New("not supported")
+ },
+ }
+}
diff --git a/cmd/gc.go b/cmd/gc.go
new file mode 100644
index 0000000..04f0076
--- /dev/null
+++ b/cmd/gc.go
@@ -0,0 +1,297 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "os"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/juicedata/juicefs/pkg/chunk"
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/object"
+ osync "github.com/juicedata/juicefs/pkg/sync"
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/juicedata/juicefs/pkg/vfs"
+
+ "github.com/urfave/cli/v2"
+)
+
+func gcFlags() *cli.Command {
+ return &cli.Command{
+ Name: "gc",
+ Usage: "collect any leaked objects",
+ ArgsUsage: "META-URL",
+ Action: gc,
+ Flags: []cli.Flag{
+ &cli.BoolFlag{
+ Name: "delete",
+ Usage: "deleted leaked objects",
+ },
+ &cli.BoolFlag{
+ Name: "compact",
+ Usage: "compact small slices into bigger ones",
+ },
+ &cli.IntFlag{
+ Name: "threads",
+ Value: 10,
+ Usage: "number threads to delete leaked objects",
+ },
+ },
+ }
+}
+
+type dChunk struct {
+ chunkid uint64
+ length uint32
+}
+
+func gc(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ return fmt.Errorf("META-URL is needed")
+ }
+ removePassword(ctx.Args().Get(0))
+ m := meta.NewClient(ctx.Args().Get(0), &meta.Config{
+ Retries: 10,
+ Strict: true,
+ MaxDeletes: ctx.Int("threads"),
+ })
+ format, err := m.Load()
+ if err != nil {
+ logger.Fatalf("load setting: %s", err)
+ }
+
+ chunkConf := chunk.Config{
+ BlockSize: format.BlockSize * 1024,
+ Compress: format.Compression,
+
+ GetTimeout: time.Second * 60,
+ PutTimeout: time.Second * 60,
+ MaxUpload: 20,
+ BufferSize: 300 << 20,
+ CacheDir: "memory",
+ }
+
+ blob, err := createStorage(format)
+ if err != nil {
+ logger.Fatalf("object storage: %s", err)
+ }
+ logger.Infof("Data use %s", blob)
+ store := chunk.NewCachedStore(blob, chunkConf)
+
+ // Scan all chunks first and do compaction if necessary
+ progress := utils.NewProgress(false, false)
+ if ctx.Bool("compact") {
+ bar := progress.AddCountBar("Scanned chunks", 0)
+ spin := progress.AddDoubleSpinner("Compacted slices")
+ m.OnMsg(meta.DeleteChunk, func(args ...interface{}) error {
+ return store.Remove(args[0].(uint64), int(args[1].(uint32)))
+ })
+ m.OnMsg(meta.CompactChunk, func(args ...interface{}) error {
+ slices := args[0].([]meta.Slice)
+ err := vfs.Compact(chunkConf, store, slices, args[1].(uint64))
+ for _, s := range slices {
+ spin.IncrInt64(int64(s.Len))
+ }
+ return err
+ })
+ if st := m.CompactAll(meta.Background, bar); st == 0 {
+ bar.Done()
+ spin.Done()
+ if progress.Quiet {
+ c, b := spin.Current()
+ logger.Infof("Compacted %d chunks (%d slices, %d bytes).", bar.Current(), c, b)
+ }
+ } else {
+ logger.Errorf("compact all chunks: %s", st)
+ }
+ } else {
+ m.OnMsg(meta.CompactChunk, func(args ...interface{}) error {
+ return nil // ignore compaction
+ })
+ }
+
+ // put it above delete count spinner
+ sliceCSpin := progress.AddCountSpinner("Listed slices")
+
+ // Delete pending chunks while listing slices
+ delete := ctx.Bool("delete")
+ var delSpin *utils.Bar
+ var chunkChan chan *dChunk // pending delete chunks
+ var wg sync.WaitGroup
+ if delete {
+ delSpin = progress.AddCountSpinner("Deleted pending")
+ chunkChan = make(chan *dChunk, 10240)
+ m.OnMsg(meta.DeleteChunk, func(args ...interface{}) error {
+ delSpin.Increment()
+ chunkChan <- &dChunk{args[0].(uint64), args[1].(uint32)}
+ return nil
+ })
+ for i := 0; i < ctx.Int("threads"); i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for c := range chunkChan {
+ if err := store.Remove(c.chunkid, int(c.length)); err != nil {
+ logger.Warnf("remove %d_%d: %s", c.chunkid, c.length, err)
+ }
+ }
+ }()
+ }
+ }
+
+ // List all slices in metadata engine
+ var c = meta.NewContext(0, 0, []uint32{0})
+ slices := make(map[meta.Ino][]meta.Slice)
+ r := m.ListSlices(c, slices, delete, sliceCSpin.Increment)
+ if r != 0 {
+ logger.Fatalf("list all slices: %s", r)
+ }
+ if delete {
+ close(chunkChan)
+ wg.Wait()
+ delSpin.Done()
+ if progress.Quiet {
+ logger.Infof("Deleted %d pending chunks", delSpin.Current())
+ }
+ }
+ sliceCSpin.Done()
+
+ // Scan all objects to find leaked ones
+ blob = object.WithPrefix(blob, "chunks/")
+ objs, err := osync.ListAll(blob, "", "")
+ if err != nil {
+ logger.Fatalf("list all blocks: %s", err)
+ }
+ keys := make(map[uint64]uint32)
+ var total int64
+ var totalBytes uint64
+ for _, ss := range slices {
+ for _, s := range ss {
+ keys[s.Chunkid] = s.Size
+ total += int64(int(s.Size-1)/chunkConf.BlockSize) + 1 // s.Size should be > 0
+ totalBytes += uint64(s.Size)
+ }
+ }
+ if progress.Quiet {
+ logger.Infof("using %d slices (%d bytes)", len(keys), totalBytes)
+ }
+
+ bar := progress.AddCountBar("Scanned objects", total)
+ valid := progress.AddDoubleSpinner("Valid objects")
+ leaked := progress.AddDoubleSpinner("Leaked objects")
+ skipped := progress.AddDoubleSpinner("Skipped objects")
+ maxMtime := time.Now().Add(time.Hour * -1)
+ strDuration := os.Getenv("JFS_GC_SKIPPEDTIME")
+ if strDuration != "" {
+ iDuration, err := strconv.Atoi(strDuration)
+ if err == nil {
+ maxMtime = time.Now().Add(time.Second * -1 * time.Duration(iDuration))
+ } else {
+ logger.Errorf("parse JFS_GC_SKIPPEDTIME=%s: %s", strDuration, err)
+ }
+ }
+
+ var leakedObj = make(chan string, 10240)
+ for i := 0; i < ctx.Int("threads"); i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for key := range leakedObj {
+ if err := blob.Delete(key); err != nil {
+ logger.Warnf("delete %s: %s", key, err)
+ }
+ }
+ }()
+ }
+
+ foundLeaked := func(obj object.Object) {
+ bar.IncrTotal(1)
+ leaked.IncrInt64(obj.Size())
+ if delete {
+ leakedObj <- obj.Key()
+ }
+ }
+
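+ // A block named <chunkid>_<index>_<size> is leaked when its chunk is not referenced by any slice, or when its index and size do not match the slice size recorded in the metadata engine.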
+ for obj := range objs {
+ if obj == nil {
+ break // failed listing
+ }
+ if obj.IsDir() {
+ continue
+ }
+ if obj.Mtime().After(maxMtime) || obj.Mtime().Unix() == 0 {
+ logger.Debugf("ignore new block: %s %s", obj.Key(), obj.Mtime())
+ bar.Increment()
+ skipped.IncrInt64(obj.Size())
+ continue
+ }
+
+ logger.Debugf("found block %s", obj.Key())
+ parts := strings.Split(obj.Key(), "/")
+ if len(parts) != 3 {
+ continue
+ }
+ name := parts[2]
+ parts = strings.Split(name, "_")
+ if len(parts) != 3 {
+ continue
+ }
+ bar.Increment()
+ cid, _ := strconv.Atoi(parts[0])
+ size := keys[uint64(cid)]
+ if size == 0 {
+ logger.Debugf("find leaked object: %s, size: %d", obj.Key(), obj.Size())
+ foundLeaked(obj)
+ continue
+ }
+ indx, _ := strconv.Atoi(parts[1])
+ csize, _ := strconv.Atoi(parts[2])
+ if csize == chunkConf.BlockSize {
+ if (indx+1)*csize > int(size) {
+ logger.Warnf("size of slice %d is larger than expected: %d > %d", cid, indx*chunkConf.BlockSize+csize, size)
+ foundLeaked(obj)
+ } else {
+ valid.IncrInt64(obj.Size())
+ }
+ } else {
+ if indx*chunkConf.BlockSize+csize != int(size) {
+ logger.Warnf("size of slice %d is %d, but expect %d", cid, indx*chunkConf.BlockSize+csize, size)
+ foundLeaked(obj)
+ } else {
+ valid.IncrInt64(obj.Size())
+ }
+ }
+ }
+ close(leakedObj)
+ wg.Wait()
+ progress.Done()
+
+ vc, _ := valid.Current()
+ lc, lb := leaked.Current()
+ sc, sb := skipped.Current()
+ logger.Infof("scanned %d objects, %d valid, %d leaked (%d bytes), %d skipped (%d bytes)",
+ bar.Current(), vc, lc, lb, sc, sb)
+ if lc > 0 && !delete {
+ logger.Infof("Please add `--delete` to clean leaked objects")
+ }
+ return nil
+}
diff --git a/cmd/gc_test.go b/cmd/gc_test.go
new file mode 100644
index 0000000..10fd9f8
--- /dev/null
+++ b/cmd/gc_test.go
@@ -0,0 +1,112 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "path"
+ "strings"
+ "testing"
+ "time"
+)
+
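+// WriteLeakedData writes a 1 MiB block that is not referenced by any slice, so it is expected to be removed by gc --delete.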
+func WriteLeakedData(dataDir string) {
+ var templateContent = "aaaaaaaabbbbbbbb"
+ var writeContent strings.Builder
+ for i := 0; i < 64*1024; i++ {
+ writeContent.Write([]byte(templateContent))
+ }
+ ioutil.WriteFile(dataDir+"chunks/0/0/"+"123456789_0_1048576", []byte(writeContent.String()), 0644)
+}
+
+func CheckLeakedData(dataDir string) bool {
+ _, err := os.Stat(dataDir + "chunks/0/0/" + "123456789_0_1048576")
+ return err != nil // true when the leaked block no longer exists
+}
+
+func RemoveAllFiles(dataDir string) {
+ _, err := os.Stat(dataDir)
+ if err == nil {
+ files, err := ioutil.ReadDir(dataDir)
+ if err == nil {
+ for _, f := range files {
+ os.RemoveAll(path.Join([]string{dataDir, f.Name()}...))
+ }
+ }
+ }
+}
+
+func TestGc(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ dataDir := "/tmp/testMountDir/test/"
+ RemoveAllFiles(dataDir + "chunks/")
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func(mountpoint string) {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }(mountpoint)
+ for i := 0; i < 10; i++ {
+ filename := fmt.Sprintf("%s/f%d.txt", mountpoint, i)
+ err := ioutil.WriteFile(filename, []byte("test"), 0644)
+ if err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ }
+
+ strEnvSkippedTime := os.Getenv("JFS_GC_SKIPPEDTIME")
+ t.Logf("JFS_GC_SKIPPEDTIME is %s", strEnvSkippedTime)
+
+ WriteLeakedData(dataDir)
+ time.Sleep(time.Duration(3) * time.Second)
+
+ gcArgs := []string{
+ "",
+ "gc",
+ "--delete",
+ metaUrl,
+ }
+ err := Main(gcArgs)
+ if err != nil {
+ t.Fatalf("gc failed: %v", err)
+ }
+
+ if !CheckLeakedData(dataDir) {
+ t.Fatalf("gc delete failed, leaked data was not deleted")
+ }
+
+ gcArgs = []string{
+ "",
+ "gc",
+ metaUrl,
+ }
+ err = Main(gcArgs)
+ if err != nil {
+ t.Fatalf("gc failed: %v", err)
+ }
+}
diff --git a/cmd/info.go b/cmd/info.go
new file mode 100644
index 0000000..d0df663
--- /dev/null
+++ b/cmd/info.go
@@ -0,0 +1,123 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+ "runtime"
+ "strconv"
+ "syscall"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/urfave/cli/v2"
+)
+
+func infoFlags() *cli.Command {
+ return &cli.Command{
+ Name: "info",
+ Usage: "show internal information for paths or inodes",
+ ArgsUsage: "PATH or INODE",
+ Action: info,
+ Flags: []cli.Flag{
+ &cli.BoolFlag{
+ Name: "inode",
+ Aliases: []string{"i"},
+ Usage: "use inode instead of path (current dir should be inside JuiceFS)",
+ },
+ &cli.BoolFlag{
+ Name: "recursive",
+ Aliases: []string{"r"},
+ Usage: "get summary of directories recursively (NOTE: it may take a long time for huge trees)",
+ },
+ },
+ }
+}
+
+func info(ctx *cli.Context) error {
+ if runtime.GOOS == "windows" {
+ logger.Infof("Windows is not supported")
+ return nil
+ }
+ if ctx.Args().Len() < 1 {
+ logger.Infof("DIR or FILE is needed")
+ return nil
+ }
+ var recursive uint8
+ if ctx.Bool("recursive") {
+ recursive = 1
+ }
+ for i := 0; i < ctx.Args().Len(); i++ {
+ path := ctx.Args().Get(i)
+ var d string
+ var inode uint64
+ var err error
+ if ctx.Bool("inode") {
+ inode, err = strconv.ParseUint(path, 10, 64)
+ d, _ = os.Getwd()
+ } else {
+ d, err = filepath.Abs(path)
+ if err != nil {
+ logger.Fatalf("abs of %s: %s", path, err)
+ }
+ inode, err = utils.GetFileInode(d)
+ }
+ if err != nil {
+ logger.Errorf("lookup inode for %s: %s", path, err)
+ continue
+ }
+
+ f := openController(d)
+ if f == nil {
+ logger.Errorf("%s is not inside JuiceFS", path)
+ continue
+ }
+
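+ // Send an Info request through the control file: a 4-byte command, a 4-byte payload size (9), then the 8-byte inode and a 1-byte recursive flag.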
+ wb := utils.NewBuffer(8 + 9)
+ wb.Put32(meta.Info)
+ wb.Put32(9)
+ wb.Put64(inode)
+ wb.Put8(recursive)
+ _, err = f.Write(wb.Bytes())
+ if err != nil {
+ logger.Fatalf("write message: %s", err)
+ }
+
+ data := make([]byte, 4)
+ n, err := f.Read(data)
+ if err != nil {
+ logger.Fatalf("read size: %d %s", n, err)
+ }
+ if n == 1 && data[0] == byte(syscall.EINVAL&0xff) {
+ logger.Fatalf("info is not supported, please upgrade and mount again")
+ }
+ r := utils.ReadBuffer(data)
+ size := r.Get32()
+ data = make([]byte, size)
+ n, err = f.Read(data)
+ if err != nil {
+ logger.Fatalf("read info: %s", err)
+ }
+ fmt.Println(path, ":")
+ fmt.Println(string(data[:n]))
+ _ = f.Close()
+ }
+
+ return nil
+}
diff --git a/cmd/info_test.go b/cmd/info_test.go
new file mode 100644
index 0000000..9862d31
--- /dev/null
+++ b/cmd/info_test.go
@@ -0,0 +1,89 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "strings"
+ "testing"
+
+ "github.com/agiledragon/gomonkey/v2"
+ . "github.com/smartystreets/goconvey/convey"
+)
+
+func TestInfo(t *testing.T) {
+ Convey("TestInfo", t, func() {
+ Convey("TestInfo", func() {
+
+ var res string
+ tmpFile, err := os.CreateTemp("/tmp", "")
+ if err != nil {
+ t.Fatalf("creat tmp file failed: %v", err)
+ }
+ defer os.Remove(tmpFile.Name())
+ if err != nil {
+ t.Fatalf("create temporary file: %v", err)
+ }
+ // mock os.Stdout
+ patches := gomonkey.ApplyGlobalVar(os.Stdout, *tmpFile)
+ defer patches.Reset()
+
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func(mountpoint string) {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }(mountpoint)
+
+ err = os.MkdirAll(fmt.Sprintf("%s/dir1", mountpoint), 0777)
+ if err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ for i := 0; i < 10; i++ {
+ filename := fmt.Sprintf("%s/dir1/f%d.txt", mountpoint, i)
+ err := ioutil.WriteFile(filename, []byte("test"), 0644)
+ if err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ }
+
+ infoArgs := []string{"", "info", fmt.Sprintf("%s/dir1", mountpoint)}
+ err = Main(infoArgs)
+ if err != nil {
+ t.Fatalf("info failed: %v", err)
+ }
+ content, err := ioutil.ReadFile(tmpFile.Name())
+ if err != nil {
+ t.Fatalf("readFile failed: %v", err)
+ }
+ res = string(content)
+ var answer = `/tmp/testDir/dir1: inode: 2 files: 10 dirs: 1 length: 40 size: 45056`
+ replacer := strings.NewReplacer("\n", "", " ", "")
+ res = replacer.Replace(res)
+ answer = replacer.Replace(answer)
+ So(res, ShouldEqual, answer)
+ })
+ })
+}
diff --git a/cmd/load.go b/cmd/load.go
new file mode 100644
index 0000000..2be44a3
--- /dev/null
+++ b/cmd/load.go
@@ -0,0 +1,60 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io"
+ "os"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/urfave/cli/v2"
+)
+
+func load(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ return fmt.Errorf("META-URL is needed")
+ }
+ var fp io.ReadCloser
+ if ctx.Args().Len() == 1 {
+ fp = os.Stdin
+ } else {
+ var err error
+ fp, err = os.Open(ctx.Args().Get(1))
+ if err != nil {
+ return err
+ }
+ defer fp.Close()
+ }
+ removePassword(ctx.Args().Get(0))
+ m := meta.NewClient(ctx.Args().Get(0), &meta.Config{Retries: 10, Strict: true})
+ if err := m.LoadMeta(fp); err != nil {
+ return err
+ }
+	logger.Infof("Load metadata from %s succeeded", ctx.Args().Get(1))
+ return nil
+}
+
+func loadFlags() *cli.Command {
+ return &cli.Command{
+ Name: "load",
+ Usage: "load metadata from a previously dumped JSON file",
+ ArgsUsage: "META-URL [FILE]",
+ Action: load,
+ }
+}
diff --git a/cmd/main.go b/cmd/main.go
new file mode 100644
index 0000000..4f680bd
--- /dev/null
+++ b/cmd/main.go
@@ -0,0 +1,311 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "log"
+ "net/http"
+ _ "net/http/pprof"
+ "os"
+ "strings"
+
+ "github.com/erikdubbelboer/gspt"
+ "github.com/google/gops/agent"
+ "github.com/sirupsen/logrus"
+
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/juicedata/juicefs/pkg/version"
+ "github.com/urfave/cli/v2"
+)
+
+var logger = utils.GetLogger("juicefs")
+
+func globalFlags() []cli.Flag {
+ return []cli.Flag{
+ &cli.BoolFlag{
+ Name: "verbose",
+ Aliases: []string{"debug", "v"},
+ Usage: "enable debug log",
+ },
+ &cli.BoolFlag{
+ Name: "quiet",
+ Aliases: []string{"q"},
+			Usage:   "show only warnings and errors",
+ },
+ &cli.BoolFlag{
+ Name: "trace",
+ Usage: "enable trace log",
+ },
+ &cli.BoolFlag{
+ Name: "no-agent",
+ Usage: "Disable pprof (:6060) and gops (:6070) agent",
+ },
+ &cli.BoolFlag{
+ Name: "no-color",
+ Usage: "disable colors",
+ },
+ }
+}
+
+func Main(args []string) error {
+ cli.VersionFlag = &cli.BoolFlag{
+ Name: "version", Aliases: []string{"V"},
+ Usage: "print only the version",
+ }
+ app := &cli.App{
+ Name: "juicefs",
+ Usage: "A POSIX file system built on Redis and object storage.",
+ Version: version.Version(),
+ Copyright: "Apache License 2.0",
+ EnableBashCompletion: true,
+ Flags: globalFlags(),
+ Commands: []*cli.Command{
+ formatFlags(),
+ mountFlags(),
+ umountFlags(),
+ gatewayFlags(),
+ syncFlags(),
+ rmrFlags(),
+ infoFlags(),
+ benchFlags(),
+ gcFlags(),
+ checkFlags(),
+ profileFlags(),
+ statsFlags(),
+ statusFlags(),
+ warmupFlags(),
+ dumpFlags(),
+ loadFlags(),
+ configFlags(),
+ destroyFlags(),
+ },
+ }
+
+ // Called via mount or fstab.
+ if strings.HasSuffix(args[0], "/mount.juicefs") {
+ if newArgs, err := handleSysMountArgs(args); err != nil {
+ log.Fatal(err)
+ } else {
+ args = newArgs
+ }
+ }
+
+ return app.Run(reorderOptions(app, args))
+
+}
+
+func main() {
+ err := Main(os.Args)
+ if err != nil {
+ log.Fatal(err)
+ }
+}
+
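+// handleSysMountArgs translates the arguments passed by mount(8)/fstab (via /sbin/mount.juicefs)
+// into a regular "juicefs mount -d" command line, mapping known -o options to CLI flags and
+// passing the remaining options through to FUSE.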
+func handleSysMountArgs(args []string) ([]string, error) {
+ optionToCmdFlag := map[string]string{
+ "attrcacheto": "attr-cache",
+ "entrycacheto": "entry-cache",
+ "direntrycacheto": "dir-entry-cache",
+ }
+ newArgs := []string{"juicefs", "mount", "-d"}
+ mountOptions := args[3:]
+ sysOptions := []string{"_netdev", "rw", "defaults", "remount"}
+ fuseOptions := make([]string, 0, 20)
+ cmdFlagsLookup := make(map[string]bool, 20)
+ for _, f := range append(mountFlags().Flags, globalFlags()...) {
+ if names := f.Names(); len(names) > 0 && len(names[0]) > 1 {
+ _, cmdFlagsLookup[names[0]] = f.(*cli.BoolFlag)
+ }
+ }
+
+ parseFlag := false
+ for _, option := range mountOptions {
+ if option == "-o" {
+ parseFlag = true
+ continue
+ }
+ if !parseFlag {
+ continue
+ }
+
+ opts := strings.Split(option, ",")
+ for _, opt := range opts {
+ opt = strings.TrimSpace(opt)
+ if opt == "" || stringContains(sysOptions, opt) {
+ continue
+ }
+ // Lower case option name is preferred, but if it's the same as flag name, we also accept it
+ if strings.Contains(opt, "=") {
+ fields := strings.SplitN(opt, "=", 2)
+ if flagName, ok := optionToCmdFlag[fields[0]]; ok {
+ newArgs = append(newArgs, fmt.Sprintf("--%s=%s", flagName, fields[1]))
+ } else if isBool, ok := cmdFlagsLookup[fields[0]]; ok && !isBool {
+ newArgs = append(newArgs, fmt.Sprintf("--%s=%s", fields[0], fields[1]))
+ } else {
+ fuseOptions = append(fuseOptions, opt)
+ }
+ } else if flagName, ok := optionToCmdFlag[opt]; ok {
+ newArgs = append(newArgs, fmt.Sprintf("--%s", flagName))
+ } else if isBool, ok := cmdFlagsLookup[opt]; ok && isBool {
+ newArgs = append(newArgs, fmt.Sprintf("--%s", opt))
+ if opt == "debug" {
+ fuseOptions = append(fuseOptions, opt)
+ }
+ } else {
+ fuseOptions = append(fuseOptions, opt)
+ }
+ }
+
+ parseFlag = false
+ }
+ if len(fuseOptions) > 0 {
+ newArgs = append(newArgs, "-o", strings.Join(fuseOptions, ","))
+ }
+ newArgs = append(newArgs, args[1], args[2])
+ logger.Debug("Parsed mount args: ", strings.Join(newArgs, " "))
+ return newArgs, nil
+}
+
+func stringContains(s []string, e string) bool {
+ for _, item := range s {
+ if item == e {
+ return true
+ }
+ }
+ return false
+}
+
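+// isFlag reports whether option matches one of the given flags, and whether the flag
+// expects a separate value argument (i.e. it is not boolean and no "=" was given).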
+func isFlag(flags []cli.Flag, option string) (bool, bool) {
+ if !strings.HasPrefix(option, "-") {
+ return false, false
+ }
+ // --V or -v work the same
+ option = strings.TrimLeft(option, "-")
+ for _, flag := range flags {
+ _, isBool := flag.(*cli.BoolFlag)
+ for _, name := range flag.Names() {
+ if option == name || strings.HasPrefix(option, name+"=") {
+ return true, !isBool && !strings.Contains(option, "=")
+ }
+ }
+ }
+ return false, false
+}
+
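+// reorderOptions moves recognized flags in front of positional arguments (global flags
+// before the sub-command, command flags right after it) so that urfave/cli can parse them.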
+func reorderOptions(app *cli.App, args []string) []string {
+ var newArgs = []string{args[0]}
+ var others []string
+ globalFlags := append(app.Flags, cli.VersionFlag)
+ for i := 1; i < len(args); i++ {
+ option := args[i]
+ if ok, hasValue := isFlag(globalFlags, option); ok {
+ newArgs = append(newArgs, option)
+ if hasValue {
+ i++
+ newArgs = append(newArgs, args[i])
+ }
+ } else {
+ others = append(others, option)
+ }
+ }
+ // no command
+ if len(others) == 0 {
+ return newArgs
+ }
+ cmdName := others[0]
+ var cmd *cli.Command
+ for _, c := range app.Commands {
+ if c.Name == cmdName {
+ cmd = c
+ }
+ }
+ if cmd == nil {
+ // can't recognize the command, skip it
+ return append(newArgs, others...)
+ }
+
+ newArgs = append(newArgs, cmdName)
+ args, others = others[1:], nil
+ // -h is valid for all the commands
+ cmdFlags := append(cmd.Flags, cli.HelpFlag)
+ for i := 0; i < len(args); i++ {
+ option := args[i]
+ if ok, hasValue := isFlag(cmdFlags, option); ok {
+ newArgs = append(newArgs, option)
+ if hasValue {
+ i++
+ newArgs = append(newArgs, args[i])
+ }
+ } else {
+ if strings.HasPrefix(option, "-") && !stringContains(args, "--generate-bash-completion") {
+ logger.Fatalf("unknown option: %s", option)
+ }
+ others = append(others, option)
+ }
+ }
+ return append(newArgs, others...)
+}
+
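+// setupAgent starts the pprof HTTP endpoint and the gops agent unless --no-agent is given,
+// trying ports 6060-6099 (pprof) and 6070-6099 (gops) and serving on the first one it can bind.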
+func setupAgent(c *cli.Context) {
+ if !c.Bool("no-agent") {
+ go func() {
+ for port := 6060; port < 6100; port++ {
+ _ = http.ListenAndServe(fmt.Sprintf("127.0.0.1:%d", port), nil)
+ }
+ }()
+ go func() {
+ for port := 6070; port < 6100; port++ {
+ _ = agent.Listen(agent.Options{Addr: fmt.Sprintf("127.0.0.1:%d", port)})
+ }
+ }()
+ }
+}
+
+func setLoggerLevel(c *cli.Context) {
+ if c.Bool("trace") {
+ utils.SetLogLevel(logrus.TraceLevel)
+ } else if c.Bool("verbose") {
+ utils.SetLogLevel(logrus.DebugLevel)
+ } else if c.Bool("quiet") {
+ utils.SetLogLevel(logrus.WarnLevel)
+ } else {
+ utils.SetLogLevel(logrus.InfoLevel)
+ }
+ if c.Bool("no-color") {
+ utils.DisableLogColor()
+ }
+ setupAgent(c)
+}
+
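+// removePassword hides the password of the meta URL in os.Args and in the process title,
+// so it does not show up in ps output.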
+func removePassword(uri string) {
+ var uri2 string
+ if strings.Contains(uri, "://") {
+ uri2 = utils.RemovePassword(uri)
+ } else {
+ uri2 = utils.RemovePassword("redis://" + uri)
+ }
+ if uri2 != uri {
+ for i, a := range os.Args {
+ if a == uri {
+ os.Args[i] = uri2
+ break
+ }
+ }
+ }
+ gspt.SetProcTitle(strings.Join(os.Args, " "))
+}
diff --git a/cmd/main_test.go b/cmd/main_test.go
new file mode 100644
index 0000000..60cdbc3
--- /dev/null
+++ b/cmd/main_test.go
@@ -0,0 +1,62 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "reflect"
+ "testing"
+
+ "github.com/urfave/cli/v2"
+)
+
+func TestArgsOrder(t *testing.T) {
+ var app = &cli.App{
+ Flags: []cli.Flag{
+ &cli.BoolFlag{
+ Name: "verbose",
+ Aliases: []string{"v"},
+ },
+ &cli.Int64Flag{
+ Name: "key",
+ Aliases: []string{"k"},
+ },
+ },
+ Commands: []*cli.Command{
+ {
+ Name: "cmd",
+ Flags: []cli.Flag{
+ &cli.Int64Flag{
+ Name: "k2",
+ },
+ },
+ },
+ },
+ }
+
+ var cases = [][]string{
+ {"test", "cmd", "a", "-k2", "v2", "b", "--v"},
+ {"test", "--v", "cmd", "-k2", "v2", "a", "b"},
+ {"test", "cmd", "a", "-k2=v", "--h"},
+ {"test", "cmd", "-k2=v", "--h", "a"},
+ }
+ for i := 0; i < len(cases); i += 2 {
+		reordered := reorderOptions(app, cases[i])
+		if !reflect.DeepEqual(cases[i+1], reordered) {
+			t.Fatalf("expected %v, but got %v", cases[i+1], reordered)
+ }
+ }
+}
diff --git a/cmd/mount.go b/cmd/mount.go
new file mode 100644
index 0000000..4993c9e
--- /dev/null
+++ b/cmd/mount.go
@@ -0,0 +1,441 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "net"
+ "net/http"
+ _ "net/http/pprof"
+ "os"
+ "os/signal"
+ "path"
+ "path/filepath"
+ "runtime"
+ "strings"
+ "syscall"
+ "time"
+
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+ "github.com/urfave/cli/v2"
+
+ "github.com/juicedata/juicefs/pkg/chunk"
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/metric"
+ "github.com/juicedata/juicefs/pkg/usage"
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/juicedata/juicefs/pkg/version"
+ "github.com/juicedata/juicefs/pkg/vfs"
+)
+
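+// installHandler unmounts the file system on SIGTERM/SIGINT/SIGHUP and forces the process
+// to exit if the umount does not finish within 3 seconds.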
+func installHandler(mp string) {
+ // Go will catch all the signals
+ signal.Ignore(syscall.SIGPIPE)
+ signalChan := make(chan os.Signal, 10)
+ signal.Notify(signalChan, syscall.SIGTERM, syscall.SIGINT, syscall.SIGHUP)
+ go func() {
+ for {
+ <-signalChan
+ go func() { _ = doUmount(mp, true) }()
+ go func() {
+ time.Sleep(time.Second * 3)
+ os.Exit(1)
+ }()
+ }
+ }()
+}
+
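+// exposeMetrics registers the Prometheus collectors and serves them over HTTP,
+// returning the actual listen address (a free port is picked when none is configured).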
+func exposeMetrics(m meta.Meta, c *cli.Context) string {
+ var ip, port string
+	// parse the default or user-specified metrics address (host:port)
+ ip, port, err := net.SplitHostPort(c.String("metrics"))
+ if err != nil {
+ logger.Fatalf("metrics format error: %v", err)
+ }
+
+ meta.InitMetrics()
+ vfs.InitMetrics()
+ go metric.UpdateMetrics(m)
+ http.Handle("/metrics", promhttp.HandlerFor(
+ prometheus.DefaultGatherer,
+ promhttp.HandlerOpts{
+ // Opt into OpenMetrics to support exemplars.
+ EnableOpenMetrics: true,
+ },
+ ))
+ prometheus.MustRegister(prometheus.NewBuildInfoCollector())
+
+	// If the metrics address is not set explicitly, host and port are chosen automatically
+	if !c.IsSet("metrics") {
+		// If only consul is set, detect the local IP reachable from the consul address
+ if c.IsSet("consul") {
+ ip, err = utils.GetLocalIp(c.String("consul"))
+ if err != nil {
+ logger.Errorf("Get local ip failed: %v", err)
+ return ""
+ }
+ }
+ }
+
+ ln, err := net.Listen("tcp", net.JoinHostPort(ip, port))
+ if err != nil {
+		// Don't fall back to another port if the metrics address was set explicitly but listening failed
+ if c.IsSet("metrics") {
+ logger.Errorf("listen on %s:%s failed: %v", ip, port, err)
+ return ""
+ }
+		// Listening on port 0 lets the kernel pick a free port
+ ln, err = net.Listen("tcp", net.JoinHostPort(ip, "0"))
+ if err != nil {
+ logger.Errorf("Listen failed: %v", err)
+ return ""
+ }
+ }
+
+ go func() {
+ if err := http.Serve(ln, nil); err != nil {
+ logger.Errorf("Serve for metrics: %s", err)
+ }
+ }()
+
+ metricsAddr := ln.Addr().String()
+ logger.Infof("Prometheus metrics listening on %s", metricsAddr)
+ return metricsAddr
+}
+
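+// wrapRegister replaces the default Prometheus registry so that every metric gets the
+// "juicefs_" prefix and is labeled with the mount point and volume name.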
+func wrapRegister(mp, name string) {
+ registry := prometheus.NewRegistry() // replace default so only JuiceFS metrics are exposed
+ prometheus.DefaultGatherer = registry
+ metricLabels := prometheus.Labels{"mp": mp, "vol_name": name}
+ prometheus.DefaultRegisterer = prometheus.WrapRegistererWithPrefix("juicefs_",
+ prometheus.WrapRegistererWith(metricLabels, registry))
+ prometheus.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))
+ prometheus.MustRegister(prometheus.NewGoCollector())
+}
+
+func mount(c *cli.Context) error {
+ setLoggerLevel(c)
+ if c.Args().Len() < 1 {
+ logger.Fatalf("Meta URL and mountpoint are required")
+ }
+ addr := c.Args().Get(0)
+ if c.Args().Len() < 2 {
+ logger.Fatalf("MOUNTPOINT is required")
+ }
+ mp := c.Args().Get(1)
+ fi, err := os.Stat(mp)
+ if !strings.Contains(mp, ":") && err != nil {
+ if err := os.MkdirAll(mp, 0777); err != nil {
+ if os.IsExist(err) {
+ // a broken mount point, umount it
+ if err = doUmount(mp, true); err != nil {
+ logger.Fatalf("umount %s: %s", mp, err)
+ }
+ } else {
+ logger.Fatalf("create %s: %s", mp, err)
+ }
+ }
+ } else if err == nil && fi.Size() == 0 {
+ // a broken mount point, umount it
+ if err = doUmount(mp, true); err != nil {
+ logger.Fatalf("umount %s: %s", mp, err)
+ }
+ }
+ var readOnly = c.Bool("read-only")
+ for _, o := range strings.Split(c.String("o"), ",") {
+ if o == "ro" {
+ readOnly = true
+ }
+ }
+ metaConf := &meta.Config{
+ Retries: 10,
+ Strict: true,
+ CaseInsensi: strings.HasSuffix(mp, ":") && runtime.GOOS == "windows",
+ ReadOnly: readOnly,
+ OpenCache: time.Duration(c.Float64("open-cache") * 1e9),
+ MountPoint: mp,
+ Subdir: c.String("subdir"),
+ MaxDeletes: c.Int("max-deletes"),
+ }
+ m := meta.NewClient(addr, metaConf)
+ format, err := m.Load()
+ if err != nil {
+ logger.Fatalf("load setting: %s", err)
+ }
+
+ // Wrap the default registry, all prometheus.MustRegister() calls should be afterwards
+ wrapRegister(mp, format.Name)
+
+ if !c.Bool("writeback") && c.IsSet("upload-delay") {
+		logger.Warnf("delayed upload only works in writeback mode")
+ }
+
+ chunkConf := chunk.Config{
+ BlockSize: format.BlockSize * 1024,
+ Compress: format.Compression,
+
+ GetTimeout: time.Second * time.Duration(c.Int("get-timeout")),
+ PutTimeout: time.Second * time.Duration(c.Int("put-timeout")),
+ MaxUpload: c.Int("max-uploads"),
+ Writeback: c.Bool("writeback"),
+ UploadDelay: c.Duration("upload-delay"),
+ Prefetch: c.Int("prefetch"),
+ BufferSize: c.Int("buffer-size") << 20,
+ UploadLimit: c.Int64("upload-limit") * 1e6 / 8,
+ DownloadLimit: c.Int64("download-limit") * 1e6 / 8,
+
+ CacheDir: c.String("cache-dir"),
+ CacheSize: int64(c.Int("cache-size")),
+ FreeSpace: float32(c.Float64("free-space-ratio")),
+ CacheMode: os.FileMode(0600),
+ CacheFullBlock: !c.Bool("cache-partial-only"),
+ AutoCreate: true,
+ }
+
+ if chunkConf.CacheDir != "memory" {
+ ds := utils.SplitDir(chunkConf.CacheDir)
+ for i := range ds {
+ ds[i] = filepath.Join(ds[i], format.UUID)
+ }
+ chunkConf.CacheDir = strings.Join(ds, string(os.PathListSeparator))
+ }
+ if c.IsSet("bucket") {
+ format.Bucket = c.String("bucket")
+ }
+ blob, err := createStorage(format)
+ if err != nil {
+ logger.Fatalf("object storage: %s", err)
+ }
+ logger.Infof("Data use %s", blob)
+ store := chunk.NewCachedStore(blob, chunkConf)
+ m.OnMsg(meta.DeleteChunk, func(args ...interface{}) error {
+ chunkid := args[0].(uint64)
+ length := args[1].(uint32)
+ return store.Remove(chunkid, int(length))
+ })
+ m.OnMsg(meta.CompactChunk, func(args ...interface{}) error {
+ slices := args[0].([]meta.Slice)
+ chunkid := args[1].(uint64)
+ return vfs.Compact(chunkConf, store, slices, chunkid)
+ })
+ conf := &vfs.Config{
+ Meta: metaConf,
+ Format: format,
+ Version: version.Version(),
+ Mountpoint: mp,
+ Chunk: &chunkConf,
+ }
+
+ if c.Bool("background") && os.Getenv("JFS_FOREGROUND") == "" {
+ if runtime.GOOS != "windows" {
+ d := c.String("cache-dir")
+ if d != "memory" && !strings.HasPrefix(d, "/") {
+ ad, err := filepath.Abs(d)
+ if err != nil {
+					logger.Fatalf("cache-dir should be an absolute path in daemon mode")
+ } else {
+ for i, a := range os.Args {
+ if a == d || a == "--cache-dir="+d {
+ os.Args[i] = a[:len(a)-len(d)] + ad
+ }
+ }
+ }
+ }
+ }
+ sqliteScheme := "sqlite3://"
+ if strings.HasPrefix(addr, sqliteScheme) {
+ path := addr[len(sqliteScheme):]
+ path2, err := filepath.Abs(path)
+ if err == nil && path2 != path {
+ for i, a := range os.Args {
+ if a == addr {
+ os.Args[i] = sqliteScheme + path2
+ }
+ }
+ }
+ }
+ // The default log to syslog is only in daemon mode.
+ utils.InitLoggers(!c.Bool("no-syslog"))
+ err := makeDaemon(c, conf.Format.Name, conf.Mountpoint, m)
+ if err != nil {
+ logger.Fatalf("Failed to make daemon: %s", err)
+ }
+ } else {
+ go checkMountpoint(conf.Format.Name, mp)
+ }
+
+ removePassword(addr)
+ err = m.NewSession()
+ if err != nil {
+ logger.Fatalf("new session: %s", err)
+ }
+ installHandler(mp)
+ v := vfs.NewVFS(conf, m, store)
+ metricsAddr := exposeMetrics(m, c)
+ if c.IsSet("consul") {
+ metric.RegisterToConsul(c.String("consul"), metricsAddr, mp)
+ }
+ if d := c.Duration("backup-meta"); d > 0 {
+ go vfs.Backup(m, blob, d)
+ }
+ if !c.Bool("no-usage-report") {
+ go usage.ReportUsage(m, version.Version())
+ }
+ mount_main(v, c)
+ return m.CloseSession()
+}
+
+func clientFlags() []cli.Flag {
+ var defaultCacheDir = "/var/jfsCache"
+ switch runtime.GOOS {
+ case "darwin":
+ fallthrough
+ case "windows":
+ homeDir, err := os.UserHomeDir()
+ if err != nil {
+ logger.Fatalf("%v", err)
+ return nil
+ }
+ defaultCacheDir = path.Join(homeDir, ".juicefs", "cache")
+ }
+ return []cli.Flag{
+ &cli.StringFlag{
+ Name: "bucket",
+ Usage: "customized endpoint to access object store",
+ },
+ &cli.IntFlag{
+ Name: "get-timeout",
+ Value: 60,
+ Usage: "the max number of seconds to download an object",
+ },
+ &cli.IntFlag{
+ Name: "put-timeout",
+ Value: 60,
+ Usage: "the max number of seconds to upload an object",
+ },
+ &cli.IntFlag{
+ Name: "io-retries",
+ Value: 30,
+ Usage: "number of retries after network failure",
+ },
+ &cli.IntFlag{
+ Name: "max-uploads",
+ Value: 20,
+ Usage: "number of connections to upload",
+ },
+ &cli.IntFlag{
+ Name: "max-deletes",
+ Value: 2,
+ Usage: "number of threads to delete objects",
+ },
+ &cli.IntFlag{
+ Name: "buffer-size",
+ Value: 300,
+ Usage: "total read/write buffering in MB",
+ },
+ &cli.Int64Flag{
+ Name: "upload-limit",
+ Value: 0,
+ Usage: "bandwidth limit for upload in Mbps",
+ },
+ &cli.Int64Flag{
+ Name: "download-limit",
+ Value: 0,
+ Usage: "bandwidth limit for download in Mbps",
+ },
+
+ &cli.IntFlag{
+ Name: "prefetch",
+ Value: 1,
+ Usage: "prefetch N blocks in parallel",
+ },
+ &cli.BoolFlag{
+ Name: "writeback",
+ Usage: "upload objects in background",
+ },
+ &cli.DurationFlag{
+ Name: "upload-delay",
+ Usage: "delayed duration for uploading objects (\"s\", \"m\", \"h\")",
+ },
+ &cli.StringFlag{
+ Name: "cache-dir",
+ Value: defaultCacheDir,
+ Usage: "directory paths of local cache, use colon to separate multiple paths",
+ },
+ &cli.IntFlag{
+ Name: "cache-size",
+ Value: 100 << 10,
+ Usage: "size of cached objects in MiB",
+ },
+ &cli.Float64Flag{
+ Name: "free-space-ratio",
+ Value: 0.1,
+ Usage: "min free space (ratio)",
+ },
+ &cli.BoolFlag{
+ Name: "cache-partial-only",
+ Usage: "cache only random/small read",
+ },
+ &cli.DurationFlag{
+ Name: "backup-meta",
+ Value: time.Hour,
+ Usage: "interval to automatically backup metadata in the object storage (0 means disable backup)",
+ },
+
+ &cli.BoolFlag{
+ Name: "read-only",
+ Usage: "allow lookup/read operations only",
+ },
+ &cli.Float64Flag{
+ Name: "open-cache",
+ Value: 0.0,
+ Usage: "open files cache timeout in seconds (0 means disable this feature)",
+ },
+ &cli.StringFlag{
+ Name: "subdir",
+ Usage: "mount a sub-directory as root",
+ },
+ }
+}
+
+func mountFlags() *cli.Command {
+ cmd := &cli.Command{
+ Name: "mount",
+ Usage: "mount a volume",
+ ArgsUsage: "META-URL MOUNTPOINT",
+ Action: mount,
+ Flags: []cli.Flag{
+ &cli.StringFlag{
+ Name: "metrics",
+ Value: "127.0.0.1:9567",
+ Usage: "address to export metrics",
+ },
+ &cli.StringFlag{
+ Name: "consul",
+ Value: "127.0.0.1:8500",
+ Usage: "consul address to register",
+ },
+ &cli.BoolFlag{
+ Name: "no-usage-report",
+ Usage: "do not send usage report",
+ },
+ },
+ }
+ cmd.Flags = append(cmd.Flags, mount_flags()...)
+ cmd.Flags = append(cmd.Flags, clientFlags()...)
+ return cmd
+}
diff --git a/cmd/mount_test.go b/cmd/mount_test.go
new file mode 100644
index 0000000..02df976
--- /dev/null
+++ b/cmd/mount_test.go
@@ -0,0 +1,127 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "context"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "net/url"
+ "reflect"
+ "testing"
+ "time"
+
+ "github.com/prometheus/client_golang/prometheus"
+
+ "github.com/go-redis/redis/v8"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+
+ "github.com/agiledragon/gomonkey/v2"
+
+ . "github.com/smartystreets/goconvey/convey"
+ "github.com/urfave/cli/v2"
+)
+
+func Test_exposeMetrics(t *testing.T) {
+ Convey("Test_exposeMetrics", t, func() {
+ Convey("Test_exposeMetrics", func() {
+ addr := "redis://127.0.0.1:6379/10"
+ var conf = meta.Config{MaxDeletes: 1}
+ client := meta.NewClient(addr, &conf)
+ var appCtx *cli.Context
+ stringPatches := gomonkey.ApplyMethod(reflect.TypeOf(appCtx), "String", func(_ *cli.Context, arg string) string {
+ switch arg {
+ case "metrics":
+ return "127.0.0.1:9567"
+ case "consul":
+ return "127.0.0.1:8500"
+ default:
+ return ""
+ }
+ })
+ isSetPatches := gomonkey.ApplyMethod(reflect.TypeOf(appCtx), "IsSet", func(_ *cli.Context, _ string) bool {
+ return false
+ })
+ defer stringPatches.Reset()
+ defer isSetPatches.Reset()
+ ResetPrometheus()
+ metricsAddr := exposeMetrics(client, appCtx)
+
+ u := url.URL{Scheme: "http", Host: metricsAddr, Path: "/metrics"}
+ resp, err := http.Get(u.String())
+ So(err, ShouldBeNil)
+ all, err := ioutil.ReadAll(resp.Body)
+ So(err, ShouldBeNil)
+ So(string(all), ShouldNotBeBlank)
+ })
+ })
+}
+
+func ResetPrometheus() {
+ http.DefaultServeMux = http.NewServeMux()
+ prometheus.DefaultRegisterer = prometheus.NewRegistry()
+}
+
+func MountTmp(metaUrl, mountpoint string) error {
+ ResetRedis(metaUrl)
+ formatArgs := []string{"", "format", "--storage", "file", "--bucket", "/tmp/testMountDir", metaUrl, "test"}
+ err := Main(formatArgs)
+ if err != nil {
+ return err
+ }
+ mountArgs := []string{"", "mount", metaUrl, mountpoint}
+
+	// Must be reset, otherwise registering the collectors again would panic
+ ResetPrometheus()
+
+ go func() {
+ err := Main(mountArgs)
+ if err != nil {
+ fmt.Printf("mount failed: %v", err)
+ }
+ }()
+ time.Sleep(2 * time.Second)
+ return nil
+}
+
+func ResetRedis(metaUrl string) {
+ opt, _ := redis.ParseURL(metaUrl)
+ rdb := redis.NewClient(opt)
+ rdb.FlushDB(context.Background())
+}
+
+func TestMount(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func(mountpoint string) {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }(mountpoint)
+
+ err := ioutil.WriteFile(fmt.Sprintf("%s/f1.txt", mountpoint), []byte("test"), 0644)
+ if err != nil {
+ t.Fatalf("Test mount failed: %v", err)
+ }
+}
diff --git a/cmd/mount_unix.go b/cmd/mount_unix.go
new file mode 100644
index 0000000..0370731
--- /dev/null
+++ b/cmd/mount_unix.go
@@ -0,0 +1,191 @@
+//go:build !windows
+// +build !windows
+
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "bytes"
+ "io/ioutil"
+ "os"
+ "path"
+ "path/filepath"
+ "runtime"
+ "syscall"
+ "time"
+
+ "github.com/juicedata/godaemon"
+ "github.com/juicedata/juicefs/pkg/fuse"
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/vfs"
+ "github.com/urfave/cli/v2"
+)
+
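+// checkMountpoint polls the mount point for up to 10 seconds; inode 1 indicates that
+// the FUSE root is mounted and the volume is ready.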
+func checkMountpoint(name, mp string) {
+ for i := 0; i < 20; i++ {
+ time.Sleep(time.Millisecond * 500)
+ st, err := os.Stat(mp)
+ if err == nil {
+ if sys, ok := st.Sys().(*syscall.Stat_t); ok && sys.Ino == 1 {
+ logger.Infof("\033[92mOK\033[0m, %s is ready at %s", name, mp)
+ return
+ }
+ }
+ _, _ = os.Stdout.WriteString(".")
+ _ = os.Stdout.Sync()
+ }
+ _, _ = os.Stdout.WriteString("\n")
+	logger.Fatalf("failed to mount after 10 seconds, please check the log (/var/log/juicefs.log) or re-mount in foreground")
+}
+
+func makeDaemon(c *cli.Context, name, mp string, m meta.Meta) error {
+ var attrs godaemon.DaemonAttr
+ attrs.OnExit = func(stage int) error {
+ if stage != 0 {
+ return nil
+ }
+ checkMountpoint(name, mp)
+ return nil
+ }
+
+ // the current dir will be changed to root in daemon,
+ // so the mount point has to be an absolute path.
+ if godaemon.Stage() == 0 {
+ for i, a := range os.Args {
+ if a == mp {
+ amp, err := filepath.Abs(mp)
+ if err == nil {
+ os.Args[i] = amp
+ } else {
+ logger.Warnf("abs of %s: %s", mp, err)
+ }
+ }
+ }
+ var err error
+ logfile := c.String("log")
+ attrs.Stdout, err = os.OpenFile(logfile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+ if err != nil {
+ logger.Errorf("open log file %s: %s", logfile, err)
+ }
+ }
+ if godaemon.Stage() <= 1 {
+ err := m.Shutdown()
+ if err != nil {
+ logger.Errorf("shutdown: %s", err)
+ }
+ }
+ _, _, err := godaemon.MakeDaemon(&attrs)
+ return err
+}
+
+func mount_flags() []cli.Flag {
+ var defaultLogDir = "/var/log"
+ switch runtime.GOOS {
+ case "darwin":
+ homeDir, err := os.UserHomeDir()
+ if err != nil {
+ logger.Fatalf("%v", err)
+ return nil
+ }
+ defaultLogDir = path.Join(homeDir, ".juicefs")
+ }
+ return []cli.Flag{
+ &cli.BoolFlag{
+ Name: "d",
+ Aliases: []string{"background"},
+ Usage: "run in background",
+ },
+ &cli.BoolFlag{
+ Name: "no-syslog",
+ Usage: "disable syslog",
+ },
+ &cli.StringFlag{
+ Name: "log",
+ Value: path.Join(defaultLogDir, "juicefs.log"),
+ Usage: "path of log file when running in background",
+ },
+ &cli.StringFlag{
+ Name: "o",
+ Usage: "other FUSE options",
+ },
+ &cli.Float64Flag{
+ Name: "attr-cache",
+ Value: 1.0,
+ Usage: "attributes cache timeout in seconds",
+ },
+ &cli.Float64Flag{
+ Name: "entry-cache",
+ Value: 1.0,
+ Usage: "file entry cache timeout in seconds",
+ },
+ &cli.Float64Flag{
+ Name: "dir-entry-cache",
+ Value: 1.0,
+ Usage: "dir entry cache timeout in seconds",
+ },
+ &cli.BoolFlag{
+ Name: "enable-xattr",
+ Usage: "enable extended attributes (xattr)",
+ },
+ }
+}
+
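+// disableUpdatedb adds fuse.juicefs to PRUNEFS in /etc/updatedb.conf so that
+// updatedb/mlocate does not scan JuiceFS mount points.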
+func disableUpdatedb() {
+ path := "/etc/updatedb.conf"
+ data, err := ioutil.ReadFile(path)
+ if err != nil {
+ return
+ }
+ fstype := "fuse.juicefs"
+ if bytes.Contains(data, []byte(fstype)) {
+ return
+ }
+ // assume that fuse.sshfs is already in PRUNEFS
+ knownFS := "fuse.sshfs"
+ p1 := bytes.Index(data, []byte("PRUNEFS"))
+ p2 := bytes.Index(data, []byte(knownFS))
+ if p1 > 0 && p2 > p1 {
+ var nd []byte
+ nd = append(nd, data[:p2]...)
+ nd = append(nd, fstype...)
+ nd = append(nd, ' ')
+ nd = append(nd, data[p2:]...)
+ err = ioutil.WriteFile(path, nd, 0644)
+ if err != nil {
+ logger.Warnf("update %s: %s", path, err)
+ } else {
+ logger.Infof("Add %s into PRUNEFS of %s", fstype, path)
+ }
+ }
+}
+
+func mount_main(v *vfs.VFS, c *cli.Context) {
+ if os.Getuid() == 0 && os.Getpid() != 1 {
+ disableUpdatedb()
+ }
+
+ conf := v.Conf
+ conf.AttrTimeout = time.Millisecond * time.Duration(c.Float64("attr-cache")*1000)
+ conf.EntryTimeout = time.Millisecond * time.Duration(c.Float64("entry-cache")*1000)
+ conf.DirEntryTimeout = time.Millisecond * time.Duration(c.Float64("dir-entry-cache")*1000)
+ logger.Infof("Mounting volume %s at %s ...", conf.Format.Name, conf.Mountpoint)
+ err := fuse.Serve(v, c.String("o"), c.Bool("enable-xattr"))
+ if err != nil {
+ logger.Fatalf("fuse: %s", err)
+ }
+}
diff --git a/cmd/mount_windows.go b/cmd/mount_windows.go
new file mode 100644
index 0000000..15672e1
--- /dev/null
+++ b/cmd/mount_windows.go
@@ -0,0 +1,58 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/vfs"
+ "github.com/juicedata/juicefs/pkg/winfsp"
+ "github.com/urfave/cli/v2"
+)
+
+func mount_flags() []cli.Flag {
+ return []cli.Flag{
+ &cli.StringFlag{
+ Name: "o",
+ Usage: "other FUSE options",
+ },
+ &cli.BoolFlag{
+ Name: "as-root",
+ Usage: "Access files as administrator",
+ },
+ &cli.Float64Flag{
+ Name: "file-cache-to",
+ Value: 0.1,
+ Usage: "Cache file attributes in seconds",
+ },
+ &cli.Float64Flag{
+ Name: "delay-close",
+ Usage: "delay file closing in seconds.",
+ },
+ }
+}
+
+func makeDaemon(c *cli.Context, name, mp string, m meta.Meta) error {
+	logger.Warnf("Cannot run in background on Windows.")
+ return nil
+}
+
+func mount_main(v *vfs.VFS, c *cli.Context) {
+ winfsp.Serve(v, c.String("o"), c.Float64("file-cache-to"), c.Bool("as-root"), c.Int("delay-close"))
+}
+
+func checkMountpoint(name, mp string) {
+}
diff --git a/cmd/profile.go b/cmd/profile.go
new file mode 100644
index 0000000..a3a2087
--- /dev/null
+++ b/cmd/profile.go
@@ -0,0 +1,399 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "bufio"
+ "fmt"
+ "os"
+ "path"
+ "regexp"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/mattn/go-isatty"
+ "github.com/urfave/cli/v2"
+)
+
+var findDigits = regexp.MustCompile(`\d+`)
+
+type profiler struct {
+ file *os.File
+ replay bool
+ tty bool
+ interval time.Duration
+ uids []string
+ gids []string
+ pids []string
+ entryChan chan *logEntry // one line
+ statsChan chan map[string]*stat
+ pause chan bool
+ /* --- for replay --- */
+ printTime chan time.Time
+ done chan bool
+}
+
+type stat struct {
+ count int
+ total int // total latency in 'us'
+}
+
+type keyStat struct {
+ key string
+ sPtr *stat
+}
+
+type logEntry struct {
+ ts time.Time
+ uid, gid, pid string
+ op string
+ latency int // us
+}
+
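+// parseLine parses one access log line: a timestamp, the [uid:x,gid:y,pid:z] triple,
+// the operation name, its arguments and the latency in seconds wrapped in angle brackets.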
+func parseLine(line string) *logEntry {
+ if len(line) < 3 { // dummy line: "#"
+ return nil
+ }
+ fields := strings.Fields(line)
+ if len(fields) < 5 {
+ logger.Warnf("Log line is invalid: %s", line)
+ return nil
+ }
+ ts, err := time.Parse("2006.01.02 15:04:05.000000", strings.Join([]string{fields[0], fields[1]}, " "))
+ if err != nil {
+ logger.Warnf("Failed to parse log line: %s: %s", line, err)
+ return nil
+ }
+ ids := findDigits.FindAllString(fields[2], 3) // e.g: [uid:0,gid:0,pid:36674]
+ if len(ids) != 3 {
+ logger.Warnf("Log line is invalid: %s", line)
+ return nil
+ }
+ latStr := fields[len(fields)-1] // e.g: <0.000003>
+ latFloat, err := strconv.ParseFloat(latStr[1:len(latStr)-1], 64)
+ if err != nil {
+ logger.Warnf("Failed to parse log line: %s: %s", line, err)
+ return nil
+ }
+ return &logEntry{
+ ts: ts,
+ uid: ids[0],
+ gid: ids[1],
+ pid: ids[2],
+ op: fields[3],
+ latency: int(latFloat * 1000000.0),
+ }
+}
+
+func (p *profiler) reader() {
+ scanner := bufio.NewScanner(p.file)
+ for scanner.Scan() {
+ p.entryChan <- parseLine(scanner.Text())
+ }
+ if err := scanner.Err(); err != nil {
+ logger.Fatalf("Reading log file failed with error: %s", err)
+ }
+ close(p.entryChan)
+ if p.replay {
+ p.done <- true
+ }
+}
+
+func (p *profiler) isValid(entry *logEntry) bool {
+ valid := func(f []string, e string) bool {
+ if len(f) == 1 && f[0] == "" {
+ return true
+ }
+ for _, v := range f {
+ if v == e {
+ return true
+ }
+ }
+ return false
+ }
+ return valid(p.uids, entry.uid) && valid(p.gids, entry.gid) && valid(p.pids, entry.pid)
+}
+
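+// counter aggregates log entries per operation; in replay mode it flushes the accumulated
+// stats and advances the virtual clock once per interval of log time.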
+func (p *profiler) counter() {
+ var edge time.Time
+ stats := make(map[string]*stat)
+ for {
+ select {
+ case entry := <-p.entryChan:
+ if entry == nil {
+ break
+ }
+ if !p.isValid(entry) {
+ break
+ }
+ if p.replay {
+ if edge.IsZero() {
+ edge = entry.ts.Add(p.interval)
+ }
+ for ; entry.ts.After(edge); edge = edge.Add(p.interval) {
+ p.statsChan <- stats
+ p.printTime <- edge
+ stats = make(map[string]*stat)
+ }
+ }
+ value, ok := stats[entry.op]
+ if !ok {
+ value = &stat{}
+ stats[entry.op] = value
+ }
+ value.count++
+ value.total += entry.latency
+ case p.statsChan <- stats:
+ if p.replay {
+ p.printTime <- edge
+ edge = edge.Add(p.interval)
+ }
+ stats = make(map[string]*stat)
+ }
+ }
+}
+
+func (p *profiler) fastCounter() {
+ var start, last time.Time
+ stats := make(map[string]*stat)
+ for entry := range p.entryChan {
+ if entry == nil {
+ continue
+ }
+ if !p.isValid(entry) {
+ continue
+ }
+ if start.IsZero() {
+ start = entry.ts
+ }
+ last = entry.ts
+ value, ok := stats[entry.op]
+ if !ok {
+ value = &stat{}
+ stats[entry.op] = value
+ }
+ value.count++
+ value.total += entry.latency
+ }
+ p.statsChan <- stats
+ p.printTime <- start
+ p.printTime <- last
+}
+
+func printLines(lines []string, tty bool) {
+ if tty {
+ fmt.Print("\033[2J\033[1;1H") // clear screen
+ fmt.Printf("\033[92m%s\n\033[0m", lines[0])
+ fmt.Printf("\033[97m%s\n\033[0m", lines[1])
+ fmt.Printf("\033[94m%s\n\033[0m", lines[2])
+ if len(lines) > 3 {
+ for _, l := range lines[3:] {
+ fmt.Printf("\033[93m%s\n\033[0m", l)
+ }
+ }
+ } else {
+ fmt.Println(lines[0])
+ for _, l := range lines[2:] {
+ fmt.Println(l)
+ }
+ fmt.Println()
+ }
+}
+
+func (p *profiler) flush(timeStamp time.Time, keyStats []keyStat, done bool) {
+ var head string
+ if p.replay {
+ if done {
+ head = "(replay done)"
+ } else {
+ head = "(replaying)"
+ }
+ }
+ output := make([]string, 3)
+ output[0] = fmt.Sprintf("> JuiceFS Profiling %13s Refresh: %.0f seconds %20s",
+ head, p.interval.Seconds(), timeStamp.Format("2006-01-02T15:04:05"))
+ output[2] = fmt.Sprintf("%-14s %10s %15s %18s %14s", "Operation", "Count", "Average(us)", "Total(us)", "Percent(%)")
+ for _, s := range keyStats {
+ output = append(output, fmt.Sprintf("%-14s %10d %15.0f %18d %14.1f",
+ s.key, s.sPtr.count, float64(s.sPtr.total)/float64(s.sPtr.count), s.sPtr.total, float64(s.sPtr.total)/float64(p.interval.Microseconds())*100.0))
+ }
+ if p.replay {
+ output[1] = fmt.Sprintln("\n[enter]Pause/Continue")
+ }
+ printLines(output, p.tty)
+}
+
+func (p *profiler) flusher() {
+ var paused, done bool
+ ticker := time.NewTicker(p.interval)
+ ts := time.Now()
+ p.flush(ts, nil, false)
+ for {
+ select {
+ case t := <-ticker.C:
+ stats := <-p.statsChan
+			if paused { // the pending ticker event may be stale after a long pause
+ paused = false
+ ticker.Stop()
+ ticker = time.NewTicker(p.interval)
+ t = time.Now()
+ }
+ if done {
+ ticker.Stop()
+ }
+ if p.replay {
+ ts = <-p.printTime
+ } else {
+ ts = t
+ }
+ keyStats := make([]keyStat, 0, len(stats))
+ for k, s := range stats {
+ keyStats = append(keyStats, keyStat{k, s})
+ }
+ sort.Slice(keyStats, func(i, j int) bool { // reversed
+ return keyStats[i].sPtr.total > keyStats[j].sPtr.total
+ })
+ p.flush(ts, keyStats, done)
+ if done {
+ os.Exit(0)
+ }
+ case paused = <-p.pause:
+ fmt.Printf("\n\033[97mPaused. Press [enter] to continue.\n\033[0m")
+ <-p.pause
+ case done = <-p.done:
+ }
+ }
+}
+
+func profile(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ logger.Fatalln("Mount point or log file must be provided!")
+ }
+ logPath := ctx.Args().First()
+ st, err := os.Stat(logPath)
+ if err != nil {
+ logger.Fatalf("Failed to stat path %s: %s", logPath, err)
+ }
+ var replay bool
+ if st.IsDir() { // mount point
+ inode, err := utils.GetFileInode(logPath)
+ if err != nil {
+ logger.Fatalf("Failed to lookup inode for %s: %s", logPath, err)
+ }
+ if inode != 1 {
+ logger.Fatalf("Path %s is not a mount point!", logPath)
+ }
+ logPath = path.Join(logPath, ".accesslog")
+ } else { // log file to be replayed
+ replay = true
+ }
+ nodelay := ctx.Int64("interval") == 0
+ if nodelay && !replay {
+ logger.Fatalf("Interval must be > 0 for real time mode!")
+ }
+ file, err := os.Open(logPath)
+ if err != nil {
+ logger.Fatalf("Failed to open log file %s: %s", logPath, err)
+ }
+ defer file.Close()
+
+ prof := profiler{
+ file: file,
+ replay: replay,
+ tty: isatty.IsTerminal(os.Stdout.Fd()),
+ interval: time.Second * time.Duration(ctx.Int64("interval")),
+ uids: strings.Split(ctx.String("uid"), ","),
+ gids: strings.Split(ctx.String("gid"), ","),
+ pids: strings.Split(ctx.String("pid"), ","),
+ entryChan: make(chan *logEntry, 16),
+ statsChan: make(chan map[string]*stat),
+ pause: make(chan bool),
+ }
+ if prof.replay {
+ prof.printTime = make(chan time.Time)
+ prof.done = make(chan bool)
+ }
+
+ go prof.reader()
+ if nodelay {
+ go prof.fastCounter()
+ stats := <-prof.statsChan
+ start := <-prof.printTime
+ last := <-prof.printTime
+ keyStats := make([]keyStat, 0, len(stats))
+ for k, s := range stats {
+ keyStats = append(keyStats, keyStat{k, s})
+ }
+ sort.Slice(keyStats, func(i, j int) bool { // reversed
+ return keyStats[i].sPtr.total > keyStats[j].sPtr.total
+ })
+ prof.replay = false
+ prof.interval = last.Sub(start)
+ prof.flush(last, keyStats, <-prof.done)
+ return nil
+ }
+
+ go prof.counter()
+ go prof.flusher()
+ var input string
+ for {
+ if _, err = fmt.Scanln(&input); err != nil {
+ logger.Fatalf("Failed to scan input: %s", err)
+ }
+ if prof.tty {
+ fmt.Print("\033[1A\033[K") // move cursor back
+ }
+ if prof.replay {
+ prof.pause <- true // pause/continue
+ }
+ }
+}
+
+func profileFlags() *cli.Command {
+ return &cli.Command{
+ Name: "profile",
+ Usage: "analyze access log",
+ Action: profile,
+ ArgsUsage: "MOUNTPOINT/LOGFILE",
+ Flags: []cli.Flag{
+ &cli.StringFlag{
+ Name: "uid",
+ Aliases: []string{"u"},
+				Usage:   "track only the specified UIDs (comma separated)",
+ },
+ &cli.StringFlag{
+ Name: "gid",
+ Aliases: []string{"g"},
+				Usage:   "track only the specified GIDs (comma separated)",
+ },
+ &cli.StringFlag{
+ Name: "pid",
+ Aliases: []string{"p"},
+				Usage:   "track only the specified PIDs (comma separated)",
+ },
+ &cli.Int64Flag{
+ Name: "interval",
+ Value: 2,
+ Usage: "flush interval in seconds; set it to 0 when replaying a log file to get an immediate result",
+ },
+ },
+ }
+}
diff --git a/cmd/rmr.go b/cmd/rmr.go
new file mode 100644
index 0000000..cddb3b7
--- /dev/null
+++ b/cmd/rmr.go
@@ -0,0 +1,105 @@
+/*
+ * JuiceFS, Copyright 2020 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+ "runtime"
+ "syscall"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/pkg/errors"
+ "github.com/urfave/cli/v2"
+)
+
+func rmrFlags() *cli.Command {
+ return &cli.Command{
+ Name: "rmr",
+ Usage: "remove directories recursively",
+ ArgsUsage: "PATH ...",
+ Action: rmr,
+ }
+}
+
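+// openController opens the special .control file of a JuiceFS mount, walking up the
+// directory tree until it is found or the root is reached.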
+func openController(path string) *os.File {
+ f, err := os.OpenFile(filepath.Join(path, ".control"), os.O_RDWR, 0)
+ if err != nil && !os.IsNotExist(err) && !errors.Is(err, syscall.ENOTDIR) {
+ logger.Errorf("%s", err)
+ return nil
+ }
+ if err != nil && path != "/" {
+ return openController(filepath.Dir(path))
+ }
+ return f
+}
+
+func rmr(ctx *cli.Context) error {
+ if runtime.GOOS == "windows" {
+ logger.Infof("Windows is not supported")
+ return nil
+ }
+ if ctx.Args().Len() < 1 {
+ logger.Infof("PATH is needed")
+ return nil
+ }
+ for i := 0; i < ctx.Args().Len(); i++ {
+ path := ctx.Args().Get(i)
+ p, err := filepath.Abs(path)
+ if err != nil {
+ logger.Errorf("abs of %s: %s", path, err)
+ continue
+ }
+ d := filepath.Dir(p)
+ name := filepath.Base(p)
+ inode, err := utils.GetFileInode(d)
+ if err != nil {
+ return fmt.Errorf("lookup inode for %s: %s", d, err)
+ }
+ f := openController(d)
+ if f == nil {
+ logger.Errorf("%s is not inside JuiceFS", path)
+ continue
+ }
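+		// control file message layout: command code (4 bytes) + payload size (4) + parent inode (8) + name length (1) + name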
+ wb := utils.NewBuffer(8 + 8 + 1 + uint32(len(name)))
+ wb.Put32(meta.Rmr)
+ wb.Put32(8 + 1 + uint32(len(name)))
+ wb.Put64(inode)
+ wb.Put8(uint8(len(name)))
+ wb.Put([]byte(name))
+ _, err = f.Write(wb.Bytes())
+ if err != nil {
+ logger.Fatalf("write message: %s", err)
+ }
+ var errs = make([]byte, 1)
+ n, err := f.Read(errs)
+ if err != nil || n != 1 {
+ logger.Fatalf("read message: %d %s", n, err)
+ }
+ if errs[0] != 0 {
+ errno := syscall.Errno(errs[0])
+ if runtime.GOOS == "windows" {
+ errno += 0x20000000
+ }
+ logger.Fatalf("RMR %s: %s", path, errno)
+ }
+ _ = f.Close()
+ }
+ return nil
+}
diff --git a/cmd/rmr_test.go b/cmd/rmr_test.go
new file mode 100644
index 0000000..2d226be
--- /dev/null
+++ b/cmd/rmr_test.go
@@ -0,0 +1,65 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "testing"
+)
+
+func TestRmr(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func(mountpoint string) {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }(mountpoint)
+
+ paths := []string{"/dir1", "/dir2", "/dir3/dir2"}
+ for _, path := range paths {
+ if err := os.MkdirAll(fmt.Sprintf("%s%s/dir2/dir3/dir4/dir5", mountpoint, path), 0777); err != nil {
+			t.Fatalf("mkdir failed: %v", err)
+ }
+ }
+ for i := 0; i < 5; i++ {
+ filename := fmt.Sprintf("%s/dir1/f%d.txt", mountpoint, i)
+ err := ioutil.WriteFile(filename, []byte("test"), 0644)
+ if err != nil {
+			t.Fatalf("write file failed: %v", err)
+ }
+ }
+
+ rmrArgs := []string{"", "rmr", mountpoint + paths[0], mountpoint + paths[1], mountpoint + paths[2]}
+ if err := Main(rmrArgs); err != nil {
+		t.Fatalf("rmr failed: %v", err)
+ }
+
+ for _, path := range paths {
+		dir, _ := os.ReadDir(mountpoint + path)
+		if len(dir) != 0 {
+			t.Fatalf("rmr did not remove the entries under %s%s", mountpoint, path)
+		}
+ }
+}
diff --git a/cmd/stats.go b/cmd/stats.go
new file mode 100644
index 0000000..2e092d1
--- /dev/null
+++ b/cmd/stats.go
@@ -0,0 +1,400 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "path"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/mattn/go-isatty"
+ "github.com/urfave/cli/v2"
+)
+
+const (
+ BLACK = 30 + iota
+ RED
+ GREEN
+ YELLOW
+ BLUE
+ MAGENTA
+ CYAN
+ WHITE
+)
+
+const (
+ RESET_SEQ = "\033[0m"
+ COLOR_SEQ = "\033[1;" // %dm
+ COLOR_DARK_SEQ = "\033[0;" // %dm
+ UNDERLINE_SEQ = "\033[4m"
+ // BOLD_SEQ = "\033[1m"
+)
+
+type statsWatcher struct {
+ tty bool
+ interval uint
+ path string
+ header string
+ sections []*section
+}
+
+func (w *statsWatcher) colorize(msg string, color int, dark bool, underline bool) string {
+ if !w.tty || msg == "" || msg == " " {
+ return msg
+ }
+ var cseq, useq string
+ if dark {
+ cseq = COLOR_DARK_SEQ
+ } else {
+ cseq = COLOR_SEQ
+ }
+ if underline {
+ useq = UNDERLINE_SEQ
+ }
+ return fmt.Sprintf("%s%s%dm%s%s", useq, cseq, color, msg, RESET_SEQ)
+}
+
+const (
+ metricByte = 1 << iota
+ metricCount
+ metricTime
+ metricCPU
+ metricGauge
+ metricCounter
+ metricHist
+)
+
+type item struct {
+ nick string // must be size <= 5
+ name string
+ typ uint8
+}
+
+type section struct {
+ name string
+ items []*item
+}
+
+func (w *statsWatcher) buildSchema(schema string, verbosity uint) {
+ for _, r := range schema {
+ var s section
+ switch r {
+ case 'u':
+ s.name = "usage"
+ s.items = append(s.items, &item{"cpu", "juicefs_cpu_usage", metricCPU | metricCounter})
+ s.items = append(s.items, &item{"mem", "juicefs_memory", metricGauge})
+ s.items = append(s.items, &item{"buf", "juicefs_used_buffer_size_bytes", metricGauge})
+ if verbosity > 0 {
+ s.items = append(s.items, &item{"cache", "juicefs_store_cache_size_bytes", metricGauge})
+ }
+ case 'f':
+ s.name = "fuse"
+ s.items = append(s.items, &item{"ops", "juicefs_fuse_ops_durations_histogram_seconds", metricTime | metricHist})
+ s.items = append(s.items, &item{"read", "juicefs_fuse_read_size_bytes_sum", metricByte | metricCounter})
+ s.items = append(s.items, &item{"write", "juicefs_fuse_written_size_bytes_sum", metricByte | metricCounter})
+ case 'm':
+ s.name = "meta"
+ s.items = append(s.items, &item{"ops", "juicefs_meta_ops_durations_histogram_seconds", metricTime | metricHist})
+ if verbosity > 0 {
+ s.items = append(s.items, &item{"txn", "juicefs_transaction_durations_histogram_seconds", metricTime | metricHist})
+ s.items = append(s.items, &item{"retry", "juicefs_transaction_restart", metricCount | metricCounter})
+ }
+ case 'c':
+ s.name = "blockcache"
+ s.items = append(s.items, &item{"read", "juicefs_blockcache_hit_bytes", metricByte | metricCounter})
+ s.items = append(s.items, &item{"write", "juicefs_blockcache_write_bytes", metricByte | metricCounter})
+ case 'o':
+ s.name = "object"
+ s.items = append(s.items, &item{"get", "juicefs_object_request_data_bytes_GET", metricByte | metricCounter})
+ if verbosity > 0 {
+ s.items = append(s.items, &item{"get_c", "juicefs_object_request_durations_histogram_seconds_GET", metricTime | metricHist})
+ }
+ s.items = append(s.items, &item{"put", "juicefs_object_request_data_bytes_PUT", metricByte | metricCounter})
+ if verbosity > 0 {
+ s.items = append(s.items, &item{"put_c", "juicefs_object_request_durations_histogram_seconds_PUT", metricTime | metricHist})
+ s.items = append(s.items, &item{"del_c", "juicefs_object_request_durations_histogram_seconds_DELETE", metricTime | metricHist})
+ }
+ case 'g':
+ s.name = "go"
+ s.items = append(s.items, &item{"alloc", "juicefs_go_memstats_alloc_bytes", metricGauge})
+ s.items = append(s.items, &item{"sys", "juicefs_go_memstats_sys_bytes", metricGauge})
+ default:
+ fmt.Printf("Warning: no item defined for %c\n", r)
+ continue
+ }
+ w.sections = append(w.sections, &s)
+ }
+ if len(w.sections) == 0 {
+ logger.Fatalln("no section to watch, please check the schema string")
+ }
+}
+
+func padding(name string, width int, char byte) string {
+ pad := width - len(name)
+ if pad < 0 {
+ pad = 0
+ name = name[0:width]
+ }
+ prefix := (pad + 1) / 2
+ buf := make([]byte, width)
+ for i := 0; i < prefix; i++ {
+ buf[i] = char
+ }
+ copy(buf[prefix:], name)
+ for i := prefix + len(name); i < width; i++ {
+ buf[i] = char
+ }
+ return string(buf)
+}
+
+func (w *statsWatcher) formatHeader() {
+ headers := make([]string, len(w.sections))
+ subHeaders := make([]string, len(w.sections))
+ for i, s := range w.sections {
+ subs := make([]string, 0, len(s.items))
+ for _, it := range s.items {
+ subs = append(subs, w.colorize(padding(it.nick, 5, ' '), BLUE, false, true))
+ if it.typ&metricHist != 0 {
+ if it.typ&metricTime != 0 {
+ subs = append(subs, w.colorize(" lat ", BLUE, false, true))
+ } else {
+ subs = append(subs, w.colorize(" avg ", BLUE, false, true))
+ }
+ }
+ }
+ width := 6*len(subs) - 1 // nick(5) + space(1)
+ subHeaders[i] = strings.Join(subs, " ")
+ headers[i] = w.colorize(padding(s.name, width, '-'), BLUE, true, false)
+ }
+ w.header = fmt.Sprintf("%s\n%s", strings.Join(headers, " "),
+ strings.Join(subHeaders, w.colorize("|", BLUE, true, false)))
+}
+
+func (w *statsWatcher) formatU64(v float64, dark, isByte bool) string {
+ if v <= 0.0 {
+ return w.colorize(" 0 ", BLACK, false, false)
+ }
+ var vi uint64
+ var unit string
+ var color int
+ switch vi = uint64(v); {
+ case vi < 10000:
+ if isByte {
+ unit = "B"
+ } else {
+ unit = " "
+ }
+ color = RED
+ case vi>>10 < 10000:
+ vi, unit, color = vi>>10, "K", YELLOW
+ case vi>>20 < 10000:
+ vi, unit, color = vi>>20, "M", GREEN
+ case vi>>30 < 10000:
+ vi, unit, color = vi>>30, "G", BLUE
+ case vi>>40 < 10000:
+ vi, unit, color = vi>>40, "T", MAGENTA
+ default:
+ vi, unit, color = vi>>50, "P", CYAN
+ }
+ return w.colorize(fmt.Sprintf("%4d", vi), color, dark, false) +
+ w.colorize(unit, BLACK, false, false)
+}
+
+func (w *statsWatcher) formatTime(v float64, dark bool) string {
+ var ret string
+ var color int
+ switch {
+ case v <= 0.0:
+ ret, color, dark = " 0 ", BLACK, false
+ case v < 10.0:
+ ret, color = fmt.Sprintf("%4.2f ", v), GREEN
+ case v < 100.0:
+ ret, color = fmt.Sprintf("%4.1f ", v), YELLOW
+ case v < 10000.0:
+ ret, color = fmt.Sprintf("%4.f ", v), RED
+ default:
+ ret, color = fmt.Sprintf("%1.e", v), MAGENTA
+ }
+ return w.colorize(ret, color, dark, false)
+}
+
+func (w *statsWatcher) formatCPU(v float64, dark bool) string {
+ var ret string
+ var color int
+ switch v = v * 100.0; {
+ case v <= 0.0:
+ ret, color = " 0.0", WHITE
+ case v < 30.0:
+ ret, color = fmt.Sprintf("%4.1f", v), GREEN
+ case v < 100.0:
+ ret, color = fmt.Sprintf("%4.1f", v), YELLOW
+ default:
+ ret, color = fmt.Sprintf("%4.f", v), RED
+ }
+ return w.colorize(ret, color, dark, false) +
+ w.colorize("%", BLACK, false, false)
+}
+
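+// printDiff prints one row of metrics: gauges as absolute values, counters and histogram
+// counts as the difference between right and left, divided by the refresh interval for the
+// non-dark summary rows; dark (per-second) rows are only shown on a terminal.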
+func (w *statsWatcher) printDiff(left, right map[string]float64, dark bool) {
+ if !w.tty && dark {
+ return
+ }
+ values := make([]string, len(w.sections))
+ for i, s := range w.sections {
+ vals := make([]string, 0, len(s.items))
+ for _, it := range s.items {
+ switch it.typ & 0xF0 {
+ case metricGauge: // currently must be metricByte
+ vals = append(vals, w.formatU64(right[it.name], dark, true))
+ case metricCounter:
+ v := (right[it.name] - left[it.name])
+ if !dark {
+ v /= float64(w.interval)
+ }
+ if it.typ&metricByte != 0 {
+ vals = append(vals, w.formatU64(v, dark, true))
+ } else if it.typ&metricCPU != 0 {
+ vals = append(vals, w.formatCPU(v, dark))
+ } else { // metricCount
+ vals = append(vals, w.formatU64(v, dark, false))
+ }
+ case metricHist: // metricTime
+ count := right[it.name+"_total"] - left[it.name+"_total"]
+ var avg float64
+ if count > 0.0 {
+ cost := right[it.name+"_sum"] - left[it.name+"_sum"]
+ if it.typ&metricTime != 0 {
+ cost *= 1000 // s -> ms
+ }
+ avg = cost / count
+ }
+ if !dark {
+ count /= float64(w.interval)
+ }
+ vals = append(vals, w.formatU64(count, dark, false), w.formatTime(avg, dark))
+ }
+ }
+ values[i] = strings.Join(vals, " ")
+ }
+ if w.tty && dark {
+ fmt.Printf("%s\r", strings.Join(values, w.colorize("|", BLUE, true, false)))
+ } else {
+ fmt.Printf("%s\n", strings.Join(values, w.colorize("|", BLUE, true, false)))
+ }
+}
+
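+// readStats reads the .stats file exposed under the mount point; each line is a
+// "name value" pair which is parsed into a float64.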
+func readStats(path string) map[string]float64 {
+ f, err := os.Open(path)
+ if err != nil {
+ logger.Warnf("open %s: %s", path, err)
+ return nil
+ }
+ defer f.Close()
+ d, err := ioutil.ReadAll(f)
+ if err != nil {
+ logger.Warnf("read %s: %s", path, err)
+ return nil
+ }
+ stats := make(map[string]float64)
+ lines := strings.Split(string(d), "\n")
+ for _, line := range lines {
+ fields := strings.Fields(line)
+ if len(fields) == 2 {
+ stats[fields[0]], err = strconv.ParseFloat(fields[1], 64)
+ if err != nil {
+ logger.Warnf("parse %s: %s", fields[1], err)
+ }
+ }
+ }
+ return stats
+}
+
+func stats(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ logger.Fatalln("mount point must be provided")
+ }
+ mp := ctx.Args().First()
+ inode, err := utils.GetFileInode(mp)
+ if err != nil {
+ logger.Fatalf("lookup inode for %s: %s", mp, err)
+ }
+ if inode != 1 {
+ logger.Fatalf("path %s is not a mount point", mp)
+ }
+
+ watcher := &statsWatcher{
+ tty: !ctx.Bool("no-color") && isatty.IsTerminal(os.Stdout.Fd()),
+ interval: ctx.Uint("interval"),
+ path: path.Join(mp, ".stats"),
+ }
+ watcher.buildSchema(ctx.String("schema"), ctx.Uint("verbosity"))
+ watcher.formatHeader()
+
+ var tick uint
+ var start, last, current map[string]float64
+ ticker := time.NewTicker(time.Second)
+ defer ticker.Stop()
+ current = readStats(watcher.path)
+ start = current
+ last = current
+ for {
+ if tick%(watcher.interval*30) == 0 {
+ fmt.Println(watcher.header)
+ }
+ if tick%watcher.interval == 0 {
+ watcher.printDiff(start, current, false)
+ start = current
+ } else {
+ watcher.printDiff(last, current, true)
+ }
+ last = current
+ tick++
+ <-ticker.C
+ current = readStats(watcher.path)
+ }
+}
+
+func statsFlags() *cli.Command {
+ return &cli.Command{
+ Name: "stats",
+ Usage: "show runtime statistics",
+ Action: stats,
+ ArgsUsage: "MOUNTPOINT",
+ Flags: []cli.Flag{
+ &cli.StringFlag{
+ Name: "schema",
+ Value: "ufmco",
+ Usage: "schema string that controls the output sections (u: usage, f: fuse, m: meta, c: blockcache, o: object, g: go)",
+ },
+ &cli.UintFlag{
+ Name: "interval",
+ Value: 1,
+ Usage: "interval in seconds between each update",
+ },
+ &cli.UintFlag{
+ Name: "verbosity",
+ Usage: "verbosity level, 0 or 1 is enough for most cases",
+ },
+ },
+ }
+}
diff --git a/cmd/status.go b/cmd/status.go
new file mode 100644
index 0000000..b249af3
--- /dev/null
+++ b/cmd/status.go
@@ -0,0 +1,86 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/urfave/cli/v2"
+)
+
+type sections struct {
+ Setting *meta.Format
+ Sessions []*meta.Session
+}
+
+func printJson(v interface{}) {
+ output, err := json.MarshalIndent(v, "", " ")
+ if err != nil {
+ logger.Fatalf("json: %s", err)
+ }
+ fmt.Println(string(output))
+}
+
+func status(ctx *cli.Context) error {
+ setLoggerLevel(ctx)
+ if ctx.Args().Len() < 1 {
+ return fmt.Errorf("META-URL is needed")
+ }
+ removePassword(ctx.Args().Get(0))
+ m := meta.NewClient(ctx.Args().Get(0), &meta.Config{Retries: 10, Strict: true})
+
+ if sid := ctx.Uint64("session"); sid != 0 {
+ s, err := m.GetSession(sid)
+ if err != nil {
+ logger.Fatalf("get session: %s", err)
+ }
+ printJson(s)
+ return nil
+ }
+
+ format, err := m.Load()
+ if err != nil {
+ logger.Fatalf("load setting: %s", err)
+ }
+ format.RemoveSecret()
+
+ sessions, err := m.ListSessions()
+ if err != nil {
+ logger.Fatalf("list sessions: %s", err)
+ }
+
+ printJson(&sections{format, sessions})
+ return nil
+}
+
+func statusFlags() *cli.Command {
+ return &cli.Command{
+ Name: "status",
+ Usage: "show status of JuiceFS",
+ ArgsUsage: "META-URL",
+ Action: status,
+ Flags: []cli.Flag{
+ &cli.Uint64Flag{
+ Name: "session",
+ Aliases: []string{"s"},
+ Usage: "show detailed information (sustained inodes, locks) of the specified session (sid)",
+ },
+ },
+ }
+}
diff --git a/cmd/status_test.go b/cmd/status_test.go
new file mode 100644
index 0000000..1315209
--- /dev/null
+++ b/cmd/status_test.go
@@ -0,0 +1,78 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "encoding/json"
+ "io/ioutil"
+ "os"
+ "testing"
+
+ "github.com/agiledragon/gomonkey/v2"
+
+ . "github.com/smartystreets/goconvey/convey"
+)
+
+func TestStatus(t *testing.T) {
+ Convey("TestInfo", t, func() {
+ Convey("TestInfo", func() {
+ tmpFile, err := os.CreateTemp("/tmp", "")
+ if err != nil {
+ t.Fatalf("creat tmp file failed: %v", err)
+ }
+ defer tmpFile.Close()
+ defer os.Remove(tmpFile.Name())
+ // mock os.Stdout
+ patches := gomonkey.ApplyGlobalVar(os.Stdout, *tmpFile)
+ defer patches.Reset()
+ metaUrl := "redis://localhost:6379/10"
+ mountpoint := "/tmp/testDir"
+ statusArgs := []string{"", "status", metaUrl}
+
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func() {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }()
+
+ if err := Main(statusArgs); err != nil {
+ t.Fatalf("test status failed: %v", err)
+ }
+
+ content, err := ioutil.ReadFile(tmpFile.Name())
+ if err != nil {
+ t.Fatalf("readFile failed: %v", err)
+ }
+
+ s := sections{}
+ if err = json.Unmarshal(content, &s); err != nil {
+ t.Fatalf("test status failed: %v", err)
+ }
+ if s.Setting.Name != "test" || s.Setting.Storage != "file" {
+ t.Fatalf("test status failed: %v", err)
+ }
+ })
+ })
+}
diff --git a/cmd/sync.go b/cmd/sync.go
new file mode 100644
index 0000000..8d8b5be
--- /dev/null
+++ b/cmd/sync.go
@@ -0,0 +1,298 @@
+/*
+ * JuiceFS, Copyright 2018 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "net"
+ "net/http"
+ _ "net/http/pprof"
+ "net/url"
+ "os"
+ "path/filepath"
+ "regexp"
+ "runtime"
+ "strings"
+ "syscall"
+
+ "github.com/juicedata/juicefs/pkg/object"
+ "github.com/juicedata/juicefs/pkg/sync"
+ "github.com/urfave/cli/v2"
+ "golang.org/x/term"
+)
+
+func supportHTTPS(name, endpoint string) bool {
+ switch name {
+ case "ufile":
+ return !(strings.Contains(endpoint, ".internal-") || strings.HasSuffix(endpoint, ".ucloud.cn"))
+ case "oss":
+ return !(strings.Contains(endpoint, ".vpc100-oss") || strings.Contains(endpoint, "internal.aliyuncs.com"))
+ case "jss":
+ return false
+ case "s3":
+ ps := strings.SplitN(strings.Split(endpoint, ":")[0], ".", 2)
+ if len(ps) > 1 && net.ParseIP(ps[1]) != nil {
+ return false
+ }
+ case "minio":
+ return false
+ }
+ return true
+}
+
+// Check if uri is local file path
+func isFilePath(uri string) bool {
+ // check drive pattern when running on Windows
+ if runtime.GOOS == "windows" &&
+ len(uri) > 1 && (('a' <= uri[0] && uri[0] <= 'z') ||
+ ('A' <= uri[0] && uri[0] <= 'Z')) && uri[1] == ':' {
+ return true
+ }
+ return !strings.Contains(uri, ":")
+}
+
+func createSyncStorage(uri string, conf *sync.Config) (object.ObjectStorage, error) {
+ if !strings.Contains(uri, "://") {
+ if isFilePath(uri) {
+ absPath, err := filepath.Abs(uri)
+ if err != nil {
+ logger.Fatalf("invalid path: %s", err.Error())
+ }
+ if !strings.HasPrefix(absPath, "/") { // Windows path
+ absPath = "/" + strings.Replace(absPath, "\\", "/", -1)
+ }
+ if strings.HasSuffix(uri, "/") {
+ absPath += "/"
+ }
+
+ // Windows: file:///C:/a/b/c, Unix: file:///a/b/c
+ uri = "file://" + absPath
+ } else { // sftp
+ var user string
+ if strings.Contains(uri, "@") {
+ parts := strings.Split(uri, "@")
+ user = parts[0]
+ uri = parts[1]
+ }
+ var pass string
+ if strings.Contains(user, ":") {
+ parts := strings.Split(user, ":")
+ user = parts[0]
+ pass = parts[1]
+ } else if os.Getenv("SSH_PRIVATE_KEY_PATH") == "" {
+ fmt.Print("Enter Password: ")
+ bytePassword, err := term.ReadPassword(int(syscall.Stdin))
+ if err != nil {
+ logger.Fatalf("Read password: %s", err.Error())
+ }
+ pass = string(bytePassword)
+ }
+ return object.CreateStorage("sftp", uri, user, pass)
+ }
+ }
+ u, err := url.Parse(uri)
+ if err != nil {
+ logger.Fatalf("Can't parse %s: %s", uri, err.Error())
+ }
+ user := u.User
+ var accessKey, secretKey string
+ if user != nil {
+ accessKey = user.Username()
+ secretKey, _ = user.Password()
+ }
+ name := strings.ToLower(u.Scheme)
+ endpoint := u.Host
+
+ isS3PathTypeUrl := isS3PathType(endpoint)
+
+ if name == "file" {
+ endpoint = u.Path
+ } else if name == "hdfs" {
+ } else if !conf.NoHTTPS && supportHTTPS(name, endpoint) {
+ endpoint = "https://" + endpoint
+ } else {
+ endpoint = "http://" + endpoint
+ }
+ if name == "minio" || name == "s3" && isS3PathTypeUrl {
+ // bucket name is part of path
+ endpoint += u.Path
+ }
+
+ store, err := object.CreateStorage(name, endpoint, accessKey, secretKey)
+ if err != nil {
+ return nil, fmt.Errorf("create %s %s: %s", name, endpoint, err)
+ }
+ if conf.Perms {
+ if _, ok := store.(object.FileSystem); !ok {
+ logger.Warnf("%s is not a file system, can not preserve permissions", store)
+ conf.Perms = false
+ }
+ }
+ switch name {
+ case "file":
+ case "minio":
+ if strings.Count(u.Path, "/") > 1 {
+ // skip bucket name
+ store = object.WithPrefix(store, strings.SplitN(u.Path[1:], "/", 2)[1])
+ }
+ case "s3":
+ if isS3PathTypeUrl && strings.Count(u.Path, "/") > 1 {
+ store = object.WithPrefix(store, strings.SplitN(u.Path[1:], "/", 2)[1])
+ } else if len(u.Path) > 1 {
+ store = object.WithPrefix(store, u.Path[1:])
+ }
+ default:
+ if len(u.Path) > 1 {
+ store = object.WithPrefix(store, u.Path[1:])
+ }
+ }
+ return store, nil
+}
+
+func isS3PathType(endpoint string) bool {
+ //localhost[:8080] 127.0.0.1[:8080] s3.ap-southeast-1.amazonaws.com[:8080] s3-ap-southeast-1.amazonaws.com[:8080]
+ pattern := `^((localhost)|(s3[.-].*\.amazonaws\.com)|((1\d{2}|2[0-4]\d|25[0-5]|[1-9]\d|[1-9])\.((1\d{2}|2[0-4]\d|25[0-5]|[1-9]\d|\d)\.){2}(1\d{2}|2[0-4]\d|25[0-5]|[1-9]\d|\d)))?(:\d*)?$`
+ return regexp.MustCompile(pattern).MatchString(endpoint)
+}
+
+const USAGE = `juicefs [options] sync [options] SRC DST
+SRC and DST should be [NAME://][ACCESS_KEY:SECRET_KEY@]BUCKET[.ENDPOINT][/PREFIX]`
+
+func doSync(c *cli.Context) error {
+ setLoggerLevel(c)
+
+ if c.Args().Len() != 2 {
+ logger.Errorf(USAGE)
+ return nil
+ }
+ config := sync.NewConfigFromCli(c)
+ go func() { _ = http.ListenAndServe(fmt.Sprintf("127.0.0.1:%d", config.HTTPPort), nil) }()
+
+ // Windows support `\` and `/` as its separator, Unix only use `/`
+ srcURL := strings.Replace(c.Args().Get(0), "\\", "/", -1)
+ dstURL := strings.Replace(c.Args().Get(1), "\\", "/", -1)
+ if strings.HasSuffix(srcURL, "/") != strings.HasSuffix(dstURL, "/") {
+ logger.Fatalf("SRC and DST should both end with path separator or not!")
+ }
+ src, err := createSyncStorage(srcURL, config)
+ if err != nil {
+ return err
+ }
+ dst, err := createSyncStorage(dstURL, config)
+ if err != nil {
+ return err
+ }
+ return sync.Sync(src, dst, config)
+}
+
+func syncFlags() *cli.Command {
+ return &cli.Command{
+ Name: "sync",
+ Usage: "sync between two storage",
+ ArgsUsage: "SRC DST",
+ Action: doSync,
+ Flags: []cli.Flag{
+ &cli.StringFlag{
+ Name: "start",
+ Aliases: []string{"s"},
+ Value: "",
+ Usage: "the first `KEY` to sync",
+ },
+ &cli.StringFlag{
+ Name: "end",
+ Aliases: []string{"e"},
+ Value: "",
+ Usage: "the last `KEY` to sync",
+ },
+ &cli.IntFlag{
+ Name: "threads",
+ Aliases: []string{"p"},
+ Value: 10,
+ Usage: "number of concurrent threads",
+ },
+ &cli.IntFlag{
+ Name: "http-port",
+ Value: 6070,
+ Usage: "HTTP `PORT` to listen to",
+ },
+ &cli.BoolFlag{
+ Name: "update",
+ Aliases: []string{"u"},
+ Usage: "skip files if the destination is newer",
+ },
+ &cli.BoolFlag{
+ Name: "force-update",
+ Aliases: []string{"f"},
+ Usage: "always update existing files",
+ },
+ &cli.BoolFlag{
+ Name: "perms",
+ Usage: "preserve permissions",
+ },
+ &cli.BoolFlag{
+ Name: "dirs",
+ Usage: "Sync directories or holders",
+ },
+ &cli.BoolFlag{
+ Name: "dry",
+ Usage: "Don't copy file",
+ },
+ &cli.BoolFlag{
+ Name: "delete-src",
+ Aliases: []string{"deleteSrc"},
+ Usage: "delete objects from source those already exist in destination",
+ },
+ &cli.BoolFlag{
+ Name: "delete-dst",
+ Aliases: []string{"deleteDst"},
+ Usage: "delete extraneous objects from destination",
+ },
+ &cli.StringSliceFlag{
+ Name: "exclude",
+ Usage: "exclude keys containing `PATTERN` (POSIX regular expressions)",
+ },
+ &cli.StringSliceFlag{
+ Name: "include",
+ Usage: "only include keys containing `PATTERN` (POSIX regular expressions)",
+ },
+ &cli.StringFlag{
+ Name: "manager",
+ Usage: "manager address",
+ },
+ &cli.StringSliceFlag{
+ Name: "worker",
+ Usage: "hosts (seperated by comma) to launch worker",
+ },
+ &cli.IntFlag{
+ Name: "bwlimit",
+ Usage: "limit bandwidth in Mbps (0 means unlimited)",
+ },
+ &cli.BoolFlag{
+ Name: "no-https",
+ Usage: "donot use HTTPS",
+ },
+ &cli.BoolFlag{
+ Name: "check-all",
+ Usage: "verify integrity of all files in source and destination",
+ },
+ &cli.BoolFlag{
+ Name: "check-new",
+ Usage: "verify integrity of newly copied files",
+ },
+ },
+ }
+}
diff --git a/cmd/sync_test.go b/cmd/sync_test.go
new file mode 100644
index 0000000..78dea79
--- /dev/null
+++ b/cmd/sync_test.go
@@ -0,0 +1,94 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "io/ioutil"
+ "os"
+ "testing"
+
+ "github.com/juicedata/juicefs/pkg/object"
+)
+
+func TestSync(t *testing.T) {
+ if os.Getenv("MINIO_TEST_BUCKET") == "" {
+ t.Skip()
+ }
+ minioDir := "synctest"
+ localDir := "/tmp/synctest"
+ defer os.RemoveAll(localDir)
+ storage, err := object.CreateStorage("minio", os.Getenv("MINIO_TEST_BUCKET"), os.Getenv("MINIO_ACCESS_KEY"), os.Getenv("MINIO_SECRET_KEY"))
+ if err != nil {
+ t.Fatalf("create storage failed: %v", err)
+ }
+
+ testInstances := []struct{ path, content string }{
+ {"t1.txt", "content1"},
+ {"testDir1/t2.txt", "content2"},
+ {"testDir1/testDir3/t3.txt", "content3"},
+ }
+
+ for _, instance := range testInstances {
+ err = storage.Put(fmt.Sprintf("/%s/%s", minioDir, instance.path), bytes.NewReader([]byte(instance.content)))
+ if err != nil {
+ t.Fatalf("storage put failed: %v", err)
+ }
+ }
+ syncArgs := []string{"", "sync", fmt.Sprintf("minio://%s/%s", os.Getenv("MINIO_TEST_BUCKET"), minioDir), fmt.Sprintf("file://%s", localDir)}
+ err = Main(syncArgs)
+ if err != nil {
+ t.Fatalf("sync failed: %v", err)
+ }
+
+ for _, instance := range testInstances {
+ c, err := ioutil.ReadFile(fmt.Sprintf("%s/%s", localDir, instance.path))
+ if err != nil || string(c) != instance.content {
+ t.Fatalf("sync failed: %v", err)
+ }
+ }
+}
+
+func Test_isS3PathType(t *testing.T) {
+
+ tests := []struct {
+ endpoint string
+ want bool
+ }{
+ {"localhost", true},
+ {"localhost:8080", true},
+ {"127.0.0.1", true},
+ {"127.0.0.1:8080", true},
+ {"s3.ap-southeast-1.amazonaws.com", true},
+ {"s3.ap-southeast-1.amazonaws.com:8080", true},
+ {"s3-ap-southeast-1.amazonaws.com", true},
+ {"s3-ap-southeast-1.amazonaws.com:8080", true},
+ {"s3-ap-southeast-1.amazonaws..com:8080", false},
+ {"ap-southeast-1.amazonaws.com", false},
+ {"s3-ap-southeast-1amazonaws.com:8080", false},
+ {"s3-ap-southeast-1", false},
+ {"s3-ap-southeast-1:8080", false},
+ }
+ for _, tt := range tests {
+ t.Run("Test host", func(t *testing.T) {
+ if got := isS3PathType(tt.endpoint); got != tt.want {
+ t.Errorf("isS3PathType() = %v, want %v", got, tt.want)
+ }
+ })
+ }
+}
diff --git a/cmd/umount.go b/cmd/umount.go
new file mode 100644
index 0000000..c94ced1
--- /dev/null
+++ b/cmd/umount.go
@@ -0,0 +1,93 @@
+/*
+ * JuiceFS, Copyright 2018 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "log"
+ "os"
+ "os/exec"
+ "path/filepath"
+ "runtime"
+
+ "github.com/urfave/cli/v2"
+)
+
+func umountFlags() *cli.Command {
+ return &cli.Command{
+ Name: "umount",
+ Usage: "unmount a volume",
+ ArgsUsage: "MOUNTPOINT",
+ Action: umount,
+ Flags: []cli.Flag{
+ &cli.BoolFlag{
+ Name: "force",
+ Aliases: []string{"f"},
+ Usage: "unmount a busy mount point by force",
+ },
+ },
+ }
+}
+
+func doUmount(mp string, force bool) error {
+ var cmd *exec.Cmd
+ switch runtime.GOOS {
+ case "darwin":
+ if force {
+ cmd = exec.Command("diskutil", "umount", "force", mp)
+ } else {
+ cmd = exec.Command("diskutil", "umount", mp)
+ }
+ case "linux":
+ if _, err := exec.LookPath("fusermount"); err == nil {
+ if force {
+ cmd = exec.Command("fusermount", "-uz", mp)
+ } else {
+ cmd = exec.Command("fusermount", "-u", mp)
+ }
+ } else {
+ if force {
+ cmd = exec.Command("umount", "-l", mp)
+ } else {
+ cmd = exec.Command("umount", mp)
+ }
+ }
+ case "windows":
+ if !force {
+ _ = os.Mkdir(filepath.Join(mp, ".UMOUNTIT"), 0755)
+ return nil
+ } else {
+ cmd = exec.Command("taskkill", "/IM", "juicefs.exe", "/F")
+ }
+ default:
+ return fmt.Errorf("OS %s is not supported", runtime.GOOS)
+ }
+ out, err := cmd.CombinedOutput()
+ if err != nil {
+ log.Print(string(out))
+ }
+ return err
+}
+
+func umount(ctx *cli.Context) error {
+ if ctx.Args().Len() < 1 {
+ return fmt.Errorf("MOUNTPOINT is needed")
+ }
+ mp := ctx.Args().Get(0)
+ force := ctx.Bool("force")
+ return doUmount(mp, force)
+}
diff --git a/cmd/umount_test.go b/cmd/umount_test.go
new file mode 100644
index 0000000..985903e
--- /dev/null
+++ b/cmd/umount_test.go
@@ -0,0 +1,51 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "testing"
+
+ "github.com/juicedata/juicefs/pkg/utils"
+)
+
+func UmountTmp(mountpoint string) error {
+ umountArgs := []string{"", "umount", mountpoint}
+ return Main(umountArgs)
+}
+
+func TestUmount(t *testing.T) {
+
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+
+ inode, err := utils.GetFileInode(mountpoint)
+ if err != nil {
+ t.Fatalf("get file inode failed: %v", err)
+ }
+ if inode == 1 {
+ t.Fatalf("umount failed: %v", err)
+ }
+
+}
diff --git a/cmd/warmup.go b/cmd/warmup.go
new file mode 100644
index 0000000..fbd76e6
--- /dev/null
+++ b/cmd/warmup.go
@@ -0,0 +1,175 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "bufio"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+ "github.com/juicedata/juicefs/pkg/utils"
+ "github.com/urfave/cli/v2"
+)
+
+const batchMax = 10240
+
+// send fill-cache command to controller file
+func sendCommand(cf *os.File, batch []string, count int, threads uint, background bool) {
+ paths := strings.Join(batch[:count], "\n")
+ var back uint8
+ if background {
+ back = 1
+ }
+ wb := utils.NewBuffer(8 + 4 + 3 + uint32(len(paths)))
+ wb.Put32(meta.FillCache)
+ wb.Put32(4 + 3 + uint32(len(paths)))
+ wb.Put32(uint32(len(paths)))
+ wb.Put([]byte(paths))
+ wb.Put16(uint16(threads))
+ wb.Put8(back)
+ if _, err := cf.Write(wb.Bytes()); err != nil {
+ logger.Fatalf("Write message: %s", err)
+ }
+ if background {
+ logger.Infof("Warm-up cache for %d paths in backgroud", count)
+ return
+ }
+ var errs = make([]byte, 1)
+ if n, err := cf.Read(errs); err != nil || n != 1 {
+ logger.Fatalf("Read message: %d %s", n, err)
+ }
+ if errs[0] != 0 {
+ logger.Fatalf("Warm up failed: %d", errs[0])
+ }
+}
+
+func warmup(ctx *cli.Context) error {
+ fname := ctx.String("file")
+ paths := ctx.Args().Slice()
+ if fname != "" {
+ fd, err := os.Open(fname)
+ if err != nil {
+ logger.Fatalf("Failed to open file %s: %s", fname, err)
+ }
+ defer fd.Close()
+ scanner := bufio.NewScanner(fd)
+ for scanner.Scan() {
+ if p := strings.TrimSpace(scanner.Text()); p != "" {
+ paths = append(paths, p)
+ }
+ }
+ if err := scanner.Err(); err != nil {
+ logger.Fatalf("Reading file %s failed with error: %s", fname, err)
+ }
+ }
+ if len(paths) == 0 {
+ logger.Infof("Nothing to warm up")
+ return nil
+ }
+
+ // find mount point
+ first, err := filepath.Abs(paths[0])
+ if err != nil {
+ logger.Fatalf("Failed to get abs of %s: %s", paths[0], err)
+ }
+ st, err := os.Stat(first)
+ if err != nil {
+ logger.Fatalf("Failed to stat path %s: %s", first, err)
+ }
+ var mp string
+ if st.IsDir() {
+ mp = first
+ } else {
+ mp = filepath.Dir(first)
+ }
+ for ; mp != "/"; mp = filepath.Dir(mp) {
+ inode, err := utils.GetFileInode(mp)
+ if err != nil {
+ logger.Fatalf("Failed to lookup inode for %s: %s", mp, err)
+ }
+ if inode == 1 {
+ break
+ }
+ }
+ if mp == "/" {
+ logger.Fatalf("Path %s is not inside JuiceFS", first)
+ }
+
+ controller := openController(mp)
+ if controller == nil {
+ logger.Fatalf("Failed to open control file under %s", mp)
+ }
+ defer controller.Close()
+
+ threads := ctx.Uint("threads")
+ background := ctx.Bool("background")
+ start := len(mp)
+ batch := make([]string, batchMax)
+ progress := utils.NewProgress(background, false)
+ bar := progress.AddCountBar("Warmed up paths", int64(len(paths)))
+ var index int
+ for _, path := range paths {
+ if strings.HasPrefix(path, mp) {
+ batch[index] = path[start:]
+ index++
+ } else {
+ logger.Warnf("Path %s is not under mount point %s", path, mp)
+ continue
+ }
+ if index >= batchMax {
+ sendCommand(controller, batch, index, threads, background)
+ bar.IncrBy(index)
+ index = 0
+ }
+ }
+ if index > 0 {
+ sendCommand(controller, batch, index, threads, background)
+ bar.IncrBy(index)
+ }
+ progress.Done()
+
+ return nil
+}
+
+func warmupFlags() *cli.Command {
+ return &cli.Command{
+ Name: "warmup",
+ Usage: "build cache for target directories/files",
+ ArgsUsage: "[PATH ...]",
+ Action: warmup,
+ Flags: []cli.Flag{
+ &cli.StringFlag{
+ Name: "file",
+ Aliases: []string{"f"},
+ Usage: "file containing a list of paths",
+ },
+ &cli.UintFlag{
+ Name: "threads",
+ Aliases: []string{"p"},
+ Value: 50,
+ Usage: "number of concurrent workers",
+ },
+ &cli.BoolFlag{
+ Name: "background",
+ Aliases: []string{"b"},
+ Usage: "run in background",
+ },
+ },
+ }
+}
diff --git a/cmd/warmup_test.go b/cmd/warmup_test.go
new file mode 100644
index 0000000..dc1ed74
--- /dev/null
+++ b/cmd/warmup_test.go
@@ -0,0 +1,82 @@
+/*
+ * JuiceFS, Copyright 2021 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package main
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "runtime"
+ "testing"
+ "time"
+
+ "github.com/juicedata/juicefs/pkg/meta"
+)
+
+func TestWarmup(t *testing.T) {
+ metaUrl := "redis://127.0.0.1:6379/10"
+ mountpoint := "/tmp/testDir"
+ defer ResetRedis(metaUrl)
+ if err := MountTmp(metaUrl, mountpoint); err != nil {
+ t.Fatalf("mount failed: %v", err)
+ }
+ defer func() {
+ err := UmountTmp(mountpoint)
+ if err != nil {
+ t.Fatalf("umount failed: %v", err)
+ }
+ }()
+
+ err := ioutil.WriteFile(fmt.Sprintf("%s/f1.txt", mountpoint), []byte("test"), 0644)
+ if err != nil {
+ t.Fatalf("test mount failed: %v", err)
+ }
+ m := meta.NewClient(metaUrl, &meta.Config{Retries: 10, Strict: true})
+ format, err := m.Load()
+ if err != nil {
+ t.Fatalf("load setting err: %s", err)
+ }
+ uuid := format.UUID
+ var cacheDir string
+ var filePath string
+ switch runtime.GOOS {
+ case "darwin", "windows":
+ homeDir, err := os.UserHomeDir()
+ if err != nil {
+ t.Fatalf("%v", err)
+ }
+ cacheDir = fmt.Sprintf("%s/.juicefs/cache", homeDir)
+ default:
+ cacheDir = "/var/jfsCache"
+ }
+
+ os.RemoveAll(fmt.Sprintf("%s/%s", cacheDir, uuid))
+ defer os.RemoveAll(fmt.Sprintf("%s/%s", cacheDir, uuid))
+
+ warmupArgs := []string{"", "warmup", mountpoint}
+ err = Main(warmupArgs)
+ if err != nil {
+ t.Fatalf("warmup error: %v", err)
+ }
+
+ time.Sleep(2 * time.Second)
+ filePath = fmt.Sprintf("%s/%s/raw/chunks/0/0/1_0_4", cacheDir, uuid)
+ content, err := ioutil.ReadFile(filePath)
+ if err != nil || string(content) != "test" {
+ t.Fatalf("warmup error:%v", err)
+ }
+}
diff --git a/deploy/juicefs-s3-gateway.yaml b/deploy/juicefs-s3-gateway.yaml
new file mode 100644
index 0000000..75d4c4d
--- /dev/null
+++ b/deploy/juicefs-s3-gateway.yaml
@@ -0,0 +1,91 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: juicefs-s3-gateway
+ namespace: kube-system
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: juicefs-s3-gateway
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: juicefs-s3-gateway
+ spec:
+ initContainers:
+ - name: format
+ image: juicedata/juicefs-csi-driver:v0.12.0
+ command:
+ - sh
+ - -c
+ - juicefs format --storage=${storage} --bucket=${bucket} --access-key=${accesskey} --secret-key=${secretkey} ${metaurl} ${name}
+ envFrom:
+ - secretRef:
+ name: juicefs-secret
+ env:
+ - name: accesskey
+ valueFrom:
+ secretKeyRef:
+ name: juicefs-secret
+ key: access-key
+ - name: secretkey
+ valueFrom:
+ secretKeyRef:
+ name: juicefs-secret
+ key: secret-key
+ containers:
+ - name: gateway
+ image: juicedata/juicefs-csi-driver:v0.11.1
+ command:
+ - sh
+ - -c
+ - juicefs gateway ${METAURL} ${NODE_IP}:9000 --metrics=${NODE_IP}:9567
+ env:
+ - name: NODE_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
+ - name: METAURL
+ valueFrom:
+ secretKeyRef:
+ name: juicefs-secret
+ key: metaurl
+ - name: MINIO_ROOT_USER
+ valueFrom:
+ secretKeyRef:
+ name: juicefs-secret
+ key: access-key
+ - name: MINIO_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: juicefs-secret
+ key: secret-key
+ ports:
+ - containerPort: 9000
+ - containerPort: 9567
+ resources:
+ limits:
+ cpu: 5000m
+ memory: 5Gi
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: juicefs-s3-gateway
+ namespace: kube-system
+ labels:
+ app.kubernetes.io/name: juicefs-s3-gateway
+spec:
+ selector:
+ app.kubernetes.io/name: juicefs-s3-gateway
+ ports:
+ - name: http
+ port: 9000
+ targetPort: 9000
+ - name: metrics
+ port: 9567
+ targetPort: 9567
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..9545c21
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,10 @@
+# JuiceFS User Manual
+
+For reading on GitHub, please select your language:
+
+- [🇬🇧 English](en/README.md)
+- [🇨🇳 简体中文](zh_cn/README.md)
+
+Or, visit the JuiceFS Documentation Center at
+
+[🌍 https://juicefs.com/docs/](https://juicefs.com/docs/)
\ No newline at end of file
diff --git a/docs/en/README.md b/docs/en/README.md
new file mode 100644
index 0000000..e7f18ce
--- /dev/null
+++ b/docs/en/README.md
@@ -0,0 +1,22 @@
+# JuiceFS User Manual
+
+[![license](https://img.shields.io/badge/license-Apache%20v2.0-blue)](https://github.com/juicedata/juicefs/blob/main/LICENSE) [![Go Report](https://img.shields.io/badge/go%20report-A+-brightgreen.svg?style=flat)](https://goreportcard.com/badge/github.com/juicedata/juicefs) [![Join Slack](https://badgen.net/badge/Slack/Join%20JuiceFS/0abd59?icon=slack)](https://join.slack.com/t/juicefs/shared_invite/zt-n9h5qdxh-0bJojPaql8cfFgwerDQJgA)
+
+![JuiceFS LOGO](images/juicefs-logo.png)
+
+JuiceFS is a high-performance [POSIX](https://en.wikipedia.org/wiki/POSIX) file system released under Apache License 2.0, specially optimized for cloud-native environments. When you store data with JuiceFS, the data itself is persisted in object storage (e.g. Amazon S3), while the corresponding metadata can be persisted in various database engines such as Redis, MySQL, and SQLite, depending on your scenario.
+
+JuiceFS connects massive cloud storage directly to big data, machine learning, artificial intelligence, and other application platforms already running in production, without any code changes, letting you use cloud storage as efficiently as local storage.
+
+## Highlighted Features
+
+1. **Fully POSIX-compatible**: Use it like a local file system and integrate seamlessly with existing applications, without intruding on business logic.
+2. **Fully Hadoop-compatible**: The JuiceFS [Hadoop Java SDK](deployment/hadoop_java_sdk.md) is compatible with Hadoop 2.x, Hadoop 3.x, and a variety of components in the Hadoop ecosystem.
+3. **S3-compatible**: The JuiceFS [S3 Gateway](deployment/s3_gateway.md) provides an S3-compatible interface.
+4. **Cloud Native**: JuiceFS provides a [Kubernetes CSI driver](deployment/how_to_use_on_kubernetes.md) for using JuiceFS in Kubernetes.
+5. **Sharing**: JuiceFS is a shared file storage that can be read and written by thousands of clients.
+6. **Strong Consistency**: Confirmed modifications are immediately visible on all servers that mount the same file system.
+7. **Outstanding Performance**: Latency can be as low as a few milliseconds and throughput can be scaled nearly without limit. [Test results](benchmark/benchmark.md)
+8. **Data Encryption**: Supports data encryption in transit and at rest; read [the guide](security/encrypt.md) for more information.
+9. **Global File Locks**: JuiceFS supports both BSD locks (flock) and POSIX record locks (fcntl).
+10. **Data Compression**: JuiceFS supports compressing all your data with [LZ4](https://lz4.github.io/lz4) or [Zstandard](https://facebook.github.io/zstd).
diff --git a/docs/en/administration/cache_management.md b/docs/en/administration/cache_management.md
new file mode 100644
index 0000000..d74b4b7
--- /dev/null
+++ b/docs/en/administration/cache_management.md
@@ -0,0 +1,109 @@
+---
+sidebar_label: Cache
+sidebar_position: 5
+slug: /cache_management
+---
+# Cache
+
+For a file system driven by a combination of object storage and database, the cache is an important medium for efficient interaction between the local client and the remote service. Read and write data can be loaded into the cache in advance or asynchronously, and then the client interacts with the remote service in the background to perform asynchronous uploads or prefetching of data. The use of caching technology can significantly reduce the latency of storage operations and increase data throughput compared to interacting with remote services directly.
+
+JuiceFS provides various caching mechanisms including metadata caching, data read/write caching, etc.
+
+## Data Consistency
+
+JuiceFS provides a "close-to-open" consistency guarantee, which means that when two or more clients read and write the same file at the same time, the changes made by client A may not be immediately visible to client B. However, once the file is closed by client A, any client re-opened it afterwards is guaranteed to see the latest data, no matter it is on the same node with A or not.
+
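+For example (assuming the file system is mounted at `/jfs` on both nodes), the guarantee plays out as follows:
+
+```bash
+# on node A
+echo "hello" > /jfs/greeting.txt   # write() followed by close()
+
+# on node B, any time after the close on node A
+cat /jfs/greeting.txt              # guaranteed to show "hello"
+```
+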
+"Close-to-open" is the minimum consistency guarantee provided by JuiceFS, and in some cases it may not be necessary to reopen the file to access the latest written data. For example, multiple applications using the same JuiceFS client to access the same file (where file changes are immediately visible), or to view the latest data on different nodes with the `tail -f` command.
+
+## Metadata Cache
+
+JuiceFS supports caching metadata in kernel and client memory (i.e. JuiceFS processes) to improve metadata access performance.
+
+### Metadata Cache in Kernel
+
+Three kinds of metadata can be cached in kernel: **attributes (attribute)**, **file entries (entry)** and **directory entries (direntry)**. The cache timeout can be controlled by the following [mount parameter](../reference/command_reference.md#juicefs-mount):
+
+```
+--attr-cache value attributes cache timeout in seconds (default: 1)
+--entry-cache value file entry cache timeout in seconds (default: 1)
+--dir-entry-cache value dir entry cache timeout in seconds (default: 1)
+```
+
+JuiceFS caches attributes, file entries, and directory entries in kernel for 1 second by default to improve lookup and getattr performance. When clients on multiple nodes use the same file system, the metadata cached in kernel only expires over time. That is, in an extreme case, node A may modify the metadata of a file (e.g., `chown`) while node B does not immediately see the update when accessing it. Of course, once the cache expires, all nodes will eventually see the change made by A.
+
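+As a minimal sketch (the metadata URL and mount point are placeholders), all three timeouts can be raised to 5 seconds at mount time:
+
+```bash
+juicefs mount --attr-cache 5 --entry-cache 5 --dir-entry-cache 5 \
+    redis://192.168.1.6:6379/1 /mnt/jfs
+```
+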
+### Metadata Cache in Client
+
+> **Note**: This feature requires JuiceFS >= 0.15.2.
+
+When a JuiceFS client `open()` a file, its file attributes are automatically cached in client memory. If the [`--open-cache`](../reference/command_reference.md#juicefs-mount) option is set to a value greater than 0 when mounting the file system, subsequent `getattr()` and `open()` operations will return the result from the in-memory cache immediately, as long as the cache has not timed out.
+
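+For instance (the timeout value, metadata URL and mount point are illustrative), a 10-second in-memory cache for opened files can be enabled like this:
+
+```bash
+juicefs mount --open-cache 10 redis://192.168.1.6:6379/1 /mnt/jfs
+```
+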
+When a file is read by `read()`, the chunk and slice information of the file is automatically cached in client memory. Reading the chunk again during the cache lifetime will return the slice information from the in-memory cache immediately.
+
+> **Hint**: You can check ["How JuiceFS Stores Files"](../reference/how_juicefs_store_files.md) to know what chunk and slice are.
+
+By default, for any file whose metadata has been cached in memory and not accessed by any process for more than 1 hour, all its metadata cache will be automatically deleted.
+
+## Data Cache
+
+Data cache is also provided in JuiceFS to improve performance, including page cache in the kernel and local cache in client host.
+
+### Data Cache in Kernel
+
+> **Note**: This feature requires JuiceFS >= 0.15.2.
+
+For files that have already been read, the kernel automatically caches their contents. Then if the file is opened again, and it's not changed (i.e., mtime has not been updated), it can be read directly from the kernel cache for the best performance.
+
+Thanks to the kernel cache, repeated reads of the same file in JuiceFS can be very fast, with latencies as low as microseconds and throughputs up to several GiBs per second.
+
+JuiceFS clients do not enable kernel write caching by default. Starting with [Linux kernel 3.15](https://github.com/torvalds/linux/commit/4d99ff8f12e), FUSE supports ["writeback-cache mode"](https://www.kernel.org/doc/Documentation/filesystems/fuse-io.txt), which allows the `write()` system call to complete very quickly. You can set the [`-o writeback_cache`](../reference/fuse_mount_options.md#writeback_cache) option when [mounting the file system](../reference/command_reference.md#juicefs-mount) to enable writeback-cache mode. It is recommended to enable this mount option when very small data (e.g. around 100 bytes) needs to be written frequently.
+
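+A possible invocation (the metadata URL and mount point are placeholders) that enables writeback-cache mode when mounting:
+
+```bash
+juicefs mount -o writeback_cache redis://192.168.1.6:6379/1 /mnt/jfs
+```
+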
+### Read Cache in Client
+
+The JuiceFS client automatically prefetches data into the cache based on the read pattern, thus improving sequential read performance. By default, 1 block is prefetched locally and concurrently while data is being read. The local cache can be placed on any local file system based on HDD, SSD or memory.
+
+> **Hint**: You can check ["How JuiceFS Stores Files"](../reference/how_juicefs_store_files.md) to learn what a block is.
+
+The local cache can be adjusted at [mount file system](../reference/command_reference.md#juicefs-mount) with the following options.
+
+```
+--cache-dir value directory paths of local cache, use colon to separate multiple paths (default: "$HOME/.juicefs/cache" or "/var/jfsCache")
+--cache-size value size of cached objects in MiB (default: 102400)
+--free-space-ratio value min free space (ratio) (default: 0.1)
+--cache-partial-only cache only random/small read (default: false)
+```
+
+Specifically, there are two ways to store the local cache of JuiceFS in memory: one is to set `--cache-dir` to `memory`, the other is to set it to `/dev/shm/`. The difference is that the former discards the cached data after the JuiceFS file system is remounted, while the latter retains it; performance is roughly the same in both cases.
+
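+For example (the cache size and addresses are illustrative), keeping the local cache in memory and capping it at 4 GiB:
+
+```bash
+juicefs mount --cache-dir memory --cache-size 4096 redis://192.168.1.6:6379/1 /mnt/jfs
+```
+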
+The JuiceFS client writes data downloaded from the object store (including new uploads less than 1 block in size) to the cache directory as fast as possible, without compression or encryption. **Because JuiceFS generates unique names for all block objects written to the object store, and all block objects are not modified, there is no need to worry about invalidating the cached data when the file content is updated.**
+
+The cache is automatically purged when it reaches the maximum space used (i.e., the cache size is greater than or equal to `--cache-size`) or when the disk is going to be full (i.e., the disk free space ratio is less than `--free-space-ratio`), and the current rule is to prioritize purging infrequently accessed files based on access time.
+
+Data caching can effectively improve the performance of random reads. For applications like Elasticsearch, ClickHouse, etc. that require higher random read performance, it is recommended to set the cache path on a faster storage medium and allocate more cache space.
+
+### Write Cache in Client
+
+When writing data, the JuiceFS client caches it in memory until a chunk is filled or the operation is forced by `close()` or `fsync()`, at which point the data is uploaded to the object storage. When `fsync()` or `close()` is called, the client waits for the data to be written to the object storage and notifies the metadata service before returning, thus ensuring data integrity.
+
+In some cases where local storage is reliable and local storage write performance is significantly better than network writes (e.g. SSD disks), write performance can be improved by enabling asynchronous upload of data so that the `close()` operation does not wait for data to be written to the object storage, but returns as soon as the data is written to the local cache directory.
+
+The asynchronous upload feature is disabled by default and can be enabled with the following options.
+
+```
+--writeback upload objects in background (default: false)
+```
+
+When writing a large number of small files in a short period of time, it is recommended to mount the file system with the `--writeback` parameter to improve write performance, and consider re-mounting without the option after the write is complete to make subsequent writes more reliable. It is also recommended to enable `--writeback` for scenarios that require a lot of random writes, such as incremental backups of MySQL.
+
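+A possible mount command for such a workload (the metadata URL and mount point are placeholders), with asynchronous upload enabled:
+
+```bash
+juicefs mount --writeback redis://192.168.1.6:6379/1 /mnt/jfs
+```
+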
+> **Warning**: When asynchronous upload is enabled, i.e. `--writeback` is specified when mounting the file system, do not delete the contents of the `rawstaging` directory inside the cache directory, as this will result in data loss.
+
+When the cache disk is too full, it will pause writing data and change to uploading data directly to the object storage (i.e., the client write cache feature is turned off).
+
+When the asynchronous upload function is enabled, the reliability of the cache itself is directly related to the reliability of data writing, and should be used with caution for scenarios requiring high data reliability.
+
+## Frequently Asked Questions
+
+### Why is 60 GiB of disk space occupied when I set the cache size to 50 GiB?
+
+JuiceFS currently estimates the size of cached objects by adding up the size of all cached objects plus a fixed overhead (4 KiB) for each of them, which is not exactly the same as the value obtained by the `du` command.
+
+To prevent the cache disk from being written to full, the client will try to reduce the cache usage when the file system where the cache directory is located is running out of space.
diff --git a/docs/en/administration/destroy.md b/docs/en/administration/destroy.md
new file mode 100644
index 0000000..b59086b
--- /dev/null
+++ b/docs/en/administration/destroy.md
@@ -0,0 +1,77 @@
+# How to destroy a file system
+
+The JuiceFS client provides the `destroy` command to completely destroy a file system, which will result in the following:
+
+- All metadata entries of this file system are deleted
+- All data blocks of the file system are deleted
+
+The command to destroy a file system is as follows:
+
+```shell
+juicefs destroy <META-URL> <UUID>
+```
+
+- `<META-URL>`: The URL address of the metadata engine
+- `<UUID>`: The UUID of the file system
+
+## Find the UUID of the file system
+
+The `status` command on the JuiceFS client can view detailed information about a file system by simply specifying the file system's metadata engine URL, e.g.
+
+```shell {7}
+$ juicefs status redis://127.0.0.1:6379/1
+
+2022/01/26 21:41:37.577645 juicefs[31181] <INFO>: Meta address: redis://127.0.0.1:6379/1
+2022/01/26 21:41:37.578238 juicefs[31181] <INFO>: Ping redis: 55.041µs
+{
+ "Setting": {
+ "Name": "macjfs",
+ "UUID": "eabb96d5-7228-461e-9240-fddbf2b576d8",
+ "Storage": "file",
+ "Bucket": "jfs/",
+ "AccessKey": "",
+ "BlockSize": 4096,
+ "Compression": "none",
+ "Shards": 0,
+ "Partitions": 0,
+ "Capacity": 0,
+ "Inodes": 0,
+ "TrashDays": 1
+ },
+ ...
+}
+```
+
+## Destroy a file system
+
+:::danger
+The destroy operation will cause all the data in the database records and object storage associated with the file system to be deleted, please make sure to backup the important data first before operation!
+:::
+
+```shell
+$ juicefs destroy redis://127.0.0.1:6379/1 eabb96d5-7228-461e-9240-fddbf2b576d8
+
+2022/01/26 21:52:17.488987 juicefs[31518] <INFO>: Meta address: redis://127.0.0.1:6379/1
+2022/01/26 21:52:17.489668 juicefs[31518] <INFO>: Ping redis: 55.542µs
+ volume name: macjfs
+ volume UUID: eabb96d5-7228-461e-9240-fddbf2b576d8
+data storage: file://jfs/
+ used bytes: 18620416
+ used inodes: 23
+WARNING: The target volume will be destroyed permanently, including:
+WARNING: 1. objects in the data storage
+WARNING: 2. entries in the metadata engine
+Proceed anyway? [y/N]: y
+deleting objects: 68
+The volume has been destroyed! You may need to delete cache directory manually.
+```
+
+When destroying a file system, the client will issue a confirmation prompt. Please make sure to check the file system information carefully and enter `y` after confirming it is correct.
+
+## FAQ
+
+```shell
+2022/01/26 21:47:30.949149 juicefs[31483] <FATAL>: 1 sessions are active, please disconnect them first
+```
+
+If you receive an error like the one above, which indicates that the file system has not been properly unmounted, please check and confirm that all mount points are unmounted before proceeding.
diff --git a/docs/en/administration/fault_diagnosis_and_analysis.md b/docs/en/administration/fault_diagnosis_and_analysis.md
new file mode 100644
index 0000000..e7ac2be
--- /dev/null
+++ b/docs/en/administration/fault_diagnosis_and_analysis.md
@@ -0,0 +1,115 @@
+---
+sidebar_label: Fault Diagnosis and Analysis
+sidebar_position: 9
+slug: /fault_diagnosis_and_analysis
+---
+
+# Fault Diagnosis and Analysis
+
+## Error Log
+
+When JuiceFS runs in the background (with the [`-d` option](../reference/command_reference.md#juicefs-mount) when mounting a volume), logs are output to syslog and `/var/log/juicefs.log` (v0.15+, refer to the [`--log` option](../reference/command_reference.md#juicefs-mount)). Depending on your operating system, you can get the logs through different commands:
+
+```bash
+# macOS
+$ syslog | grep 'juicefs'
+
+# Debian based system
+$ cat /var/log/syslog | grep 'juicefs'
+
+# CentOS based system
+$ cat /var/log/messages | grep 'juicefs'
+
+# v0.15+
+$ tail -n 100 /var/log/juicefs.log
+```
+
+There are 4 log levels. You can use the `grep` command to filter different levels of logs for performance analysis or troubleshooting:
+
+```
+$ cat /var/log/syslog | grep 'juicefs' | grep '<ERROR>'
+$ cat /var/log/syslog | grep 'juicefs' | grep '<WARNING>'
+$ cat /var/log/syslog | grep 'juicefs' | grep '<INFO>'
+$ cat /var/log/syslog | grep 'juicefs' | grep '<DEBUG>'
+```
+
+## Access Log
+
+There is a virtual file called `.accesslog` in the root of JuiceFS that shows all the operations and the time they take, for example:
+
+```bash
+$ cat /jfs/.accesslog
+2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>
+2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>
+2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006>
+```
+
+The last number on each line is the time (in seconds) the current operation takes. You can use this to inspect every single operation, or try `juicefs profile /jfs` to monitor aggregated statistics. Please run `juicefs profile -h` or refer to [here](../benchmark/operations_profiling.md) to learn more about this subcommand.
+
+## Runtime Information
+
+By default, JuiceFS clients will listen to a TCP port locally via [pprof](https://pkg.go.dev/net/http/pprof) to get runtime information such as Goroutine stack information, CPU performance statistics, memory allocation statistics. You can see the specific port number that the current JuiceFS client is listening on by using the system command (e.g. `lsof`):
+
+:::note
+If JuiceFS is mounted via the root user, then you need to add `sudo` before the `lsof` command.
+:::
+
+```bash
+$ lsof -i -nP | grep LISTEN | grep juicefs
+juicefs 32666 user 8u IPv4 0x44992f0610d9870b 0t0 TCP 127.0.0.1:6061 (LISTEN)
+juicefs 32666 user 9u IPv4 0x44992f0619bf91cb 0t0 TCP 127.0.0.1:6071 (LISTEN)
+juicefs 32666 user 15u IPv4 0x44992f062886fc5b 0t0 TCP 127.0.0.1:9567 (LISTEN)
+```
+
+By default, pprof listens on port numbers starting at 6060 and ending at 6099, so the actual port number in the above example is 6061. Once you get the listening port number, you can view all the available runtime information at `http://localhost:<PORT>/debug/pprof`, and some important runtime information is as follows:
+
+- Goroutine stack information: `http://localhost:<PORT>/debug/pprof/goroutine?debug=1`
+- CPU performance statistics: `http://localhost:<PORT>/debug/pprof/profile?seconds=30`
+- Memory allocation statistics: `http://localhost:<PORT>/debug/pprof/heap`
+
+To make it easier to analyze this runtime information, you can save it locally, e.g.:
+
+```bash
+$ curl 'http://localhost:<PORT>/debug/pprof/goroutine?debug=1' > juicefs.goroutine.txt
+$ curl 'http://localhost:<PORT>/debug/pprof/profile?seconds=30' > juicefs.cpu.pb.gz
+$ curl 'http://localhost:<PORT>/debug/pprof/heap' > juicefs.heap.pb.gz
+```
+
+If you have the `go` command installed, you can analyze it directly with the `go tool pprof` command, for example to analyze CPU performance statistics:
+
+```bash
+$ go tool pprof 'http://localhost:<PORT>/debug/pprof/profile?seconds=30'
+Fetching profile over HTTP from http://localhost:<PORT>/debug/pprof/profile?seconds=30
+Saved profile in /Users/xxx/pprof/pprof.samples.cpu.001.pb.gz
+Type: cpu
+Time: Dec 17, 2021 at 1:41pm (CST)
+Duration: 30.12s, Total samples = 32.06s (106.42%)
+Entering interactive mode (type "help" for commands, "o" for options)
+(pprof) top
+Showing nodes accounting for 30.57s, 95.35% of 32.06s total
+Dropped 285 nodes (cum <= 0.16s)
+Showing top 10 nodes out of 192
+ flat flat% sum% cum cum%
+ 14.73s 45.95% 45.95% 14.74s 45.98% runtime.cgocall
+ 7.39s 23.05% 69.00% 7.41s 23.11% syscall.syscall
+ 2.92s 9.11% 78.10% 2.92s 9.11% runtime.pthread_cond_wait
+ 2.35s 7.33% 85.43% 2.35s 7.33% runtime.pthread_cond_signal
+ 1.13s 3.52% 88.96% 1.14s 3.56% runtime.nanotime1
+ 0.77s 2.40% 91.36% 0.77s 2.40% syscall.Syscall
+ 0.49s 1.53% 92.89% 0.49s 1.53% runtime.memmove
+ 0.31s 0.97% 93.86% 0.31s 0.97% runtime.kevent
+ 0.27s 0.84% 94.70% 0.27s 0.84% runtime.usleep
+ 0.21s 0.66% 95.35% 0.21s 0.66% runtime.madvise
+```
+
+Runtime information can also be exported to visual charts for a more intuitive analysis. The visual charts support exporting to various formats such as HTML, PDF, SVG, PNG, etc. For example, the command to export memory allocation statistics as a PDF file is as follows:
+
+:::note
+The export to visual chart function relies on [Graphviz](https://graphviz.org), so please install it first.
+:::
+
+```bash
+$ go tool pprof -pdf 'http://localhost:<PORT>/debug/pprof/heap' > juicefs.heap.pdf
+```
+
+For more information about pprof, please see the [official documentation](https://github.com/google/pprof/blob/master/doc/README.md).
diff --git a/docs/en/administration/metadata/_mysql_best_practices.md b/docs/en/administration/metadata/_mysql_best_practices.md
new file mode 100644
index 0000000..459d058
--- /dev/null
+++ b/docs/en/administration/metadata/_mysql_best_practices.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: MySQL Best Practices
+sidebar_position: 2
+---
+# MySQL Best Practices
\ No newline at end of file
diff --git a/docs/en/administration/metadata/_tikv_best_practices.md b/docs/en/administration/metadata/_tikv_best_practices.md
new file mode 100644
index 0000000..146b3be
--- /dev/null
+++ b/docs/en/administration/metadata/_tikv_best_practices.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: TiKV Best Practices
+sidebar_position: 3
+---
+# TiKV Best Practices
\ No newline at end of file
diff --git a/docs/en/administration/metadata/postgresql_best_practices.md b/docs/en/administration/metadata/postgresql_best_practices.md
new file mode 100644
index 0000000..ae5c551
--- /dev/null
+++ b/docs/en/administration/metadata/postgresql_best_practices.md
@@ -0,0 +1,51 @@
+---
+sidebar_label: PostgreSQL
+sidebar_position: 2
+---
+# PostgreSQL Best Practices
+
+For distributed file systems where data and metadata are stored separately, the read and write performance of metadata directly affects the efficiency of the whole system, and the security of metadata is also directly related to the data security of the whole system.
+
+In a production environment, it is recommended to give priority to a managed database service provided by your cloud platform, combined with an appropriate high-availability architecture.
+
+Whether you build it yourself or use a cloud database, you should always pay attention to the integrity and security of metadata when using JuiceFS.
+
+## Communication Security
+
+By default, JuiceFS clients will use SSL encryption to connect to PostgreSQL. If SSL encryption is not enabled on the database, you need to append the `sslmode=disable` parameter to the metadata URL.
+
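+For example (host, credentials and database name are placeholders), a metadata URL with SSL disabled looks like this:
+
+```shell
+juicefs mount -d "postgres://user:mypassword@192.168.1.6:5432/juicefs?sslmode=disable" /mnt/jfs
+```
+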
+It is recommended to configure and always enable SSL encryption on the database server side.
+
+## Passing sensitive information via environment variables
+
+Although it is easy and convenient to set the database password directly in the metadata URL, the password may be leaked in logs or program output, and for data security, the database password should always be passed through an environment variable.
+
+Environment variable names can be freely defined, e.g.
+
+```shell
+export PG_PASSWD=mypassword
+```
+
+Then reference the environment variable in the metadata URL to pass the database password:
+
+```shell
+juicefs mount -d "postgres://user:$PG_PASSWD@192.168.1.6:5432/juicefs" /mnt/jfs
+```
+
+## Backup periodically
+
+Please refer to the official manual [Chapter 26. Backup and Restore](https://www.postgresql.org/docs/current/backup.html) to learn how to backup and restore the database.
+
+It is recommended to make a database backup plan and follow it periodically, and at the same time, try to restore the data in an experimental environment to confirm that the backup is valid.
+
+## Using connection pooling
+
+Connection pooling is an intermediate layer between the client and the database, which acts as an intermediary to improve connection efficiency and reduce the loss of short connections. Commonly used connection pools are [PgBouncer](https://www.pgbouncer.org/) and [Pgpool-II](https://www.pgpool.net/).
+
+## High Availability
+
+The official PostgreSQL document [High Availability, Load Balancing, and Replication](https://www.postgresql.org/docs/current/different-replication-solutions.html) compares several common database high availability solutions; please choose an appropriate one according to your needs.
+
+:::note
+JuiceFS uses [transactions](https://www.postgresql.org/docs/current/tutorial-transactions.html) to ensure atomicity of metadata operations. Since PostgreSQL does not yet support Multi-Shard (Distributed) transactions, do not use a multi-server distributed architecture for the JuiceFS metadata.
+:::
diff --git a/docs/en/administration/metadata/redis_best_practices.md b/docs/en/administration/metadata/redis_best_practices.md
new file mode 100644
index 0000000..b7ca5dd
--- /dev/null
+++ b/docs/en/administration/metadata/redis_best_practices.md
@@ -0,0 +1,126 @@
+---
+sidebar_label: Redis
+sidebar_position: 1
+slug: /redis_best_practices
+---
+# Redis Best Practices
+
+This is a guide about Redis best practices. Redis is a critical component in the JuiceFS architecture: it stores all the file system metadata and serves metadata operations from clients. If Redis has any problem (either service unavailability or data loss), the user experience will be affected.
+
+**It is highly recommended to use a Redis service managed by a public cloud provider if possible.** See ["Recommended Managed Redis Service"](#recommended-managed-redis-service) for more information. If you still need to operate Redis by yourself in a production environment, continue reading the following contents.
+
+## Memory usage
+
+The space used by the JuiceFS metadata engine is mainly related to the number of files in the file system. According to our experience, the metadata of each file occupies approximately 300 bytes of memory. Therefore, if you want to store 100 million files, approximately 30 GiB of memory is required.
+
+You can check the specific memory usage through Redis's [`INFO memory`](https://redis.io/commands/info) command, for example:
+
+```
+> INFO memory
+used_memory: 19167628056
+used_memory_human: 17.85G
+used_memory_rss: 20684886016
+used_memory_rss_human: 19.26G
+...
+used_memory_overhead: 5727954464
+...
+used_memory_dataset: 13439673592
+used_memory_dataset_perc: 70.12%
+```
+
+Among them, `used_memory_rss` is the total memory size actually used by Redis, which includes not only the size of data stored in Redis (that is, `used_memory_dataset` above), but also some Redis [system overhead](https://redis.io/commands/memory-stats) (that is, `used_memory_overhead` above). As mentioned earlier, the metadata of each file occupies about 300 bytes, which is counted in `used_memory_dataset`. If you find that the metadata of a single file in your JuiceFS file system occupies much more than 300 bytes, you can try to run the [`juicefs gc`](../../reference/command_reference.md#juicefs-gc) command to clean up possible redundant data.
+
+---
+
+> **Note**: The following paragraphs are extracted from the official Redis documentation. They may be outdated; please refer to the latest version of the official documentation.
+
+## High Availability
+
+[Redis Sentinel](https://redis.io/topics/sentinel) is the official high availability solution for Redis. It provides the following capabilities:
+
+- **Monitoring**. Sentinel constantly checks if your master and replica instances are working as expected.
+- **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances.
+- **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting.
+- **Configuration provider**. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.
+
+**A stable release of Redis Sentinel is shipped since Redis 2.8**. Redis Sentinel version 1, shipped with Redis 2.6, is deprecated and should not be used.
+
+There are some [fundamental things](https://redis.io/topics/sentinel#fundamental-things-to-know-about-sentinel-before-deploying) you need to know about Redis Sentinel before using it:
+
+1. You need at least three Sentinel instances for a robust deployment.
+2. The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way. So for example different physical servers or Virtual Machines executed on different availability zones.
+3. **Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication.** However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments, while there are other less secure ways to deploy it.
+4. There is no HA setup which is safe if you don't test from time to time in development environments, or even better if you can, in production environments, if they work. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working).
+5. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master.
+
+Please read the [official documentation](https://redis.io/topics/sentinel) for more information.
+
+Once Redis servers and Sentinels are deployed, the `META-URL` can be specified as `redis[s]://[[USER]:PASSWORD@]MASTER_NAME,SENTINEL_ADDR[,SENTINEL_ADDR]:SENTINEL_PORT[/DB]`, for example:
+
+```bash
+$ ./juicefs mount redis://:password@masterName,1.2.3.4,1.2.5.6:26379/2 ~/jfs
+```
+
+> **Note**: For v0.16+, the `PASSWORD` in the URL will be used to connect to the Redis server; the password for Sentinel
+> should be provided using the environment variable `SENTINEL_PASSWORD`. For earlier versions, the `PASSWORD` is used for both
+> the Redis server and Sentinel; they can be overridden by the environment variables `SENTINEL_PASSWORD` and `REDIS_PASSWORD`.
+
+## Data Durability
+
+Redis provides a range of different [persistence](https://redis.io/topics/persistence) options:
+
+- The RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
+- The AOF persistence logs every write operation received by the server, that will be played again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself, in an append-only fashion. Redis is able to rewrite the log in the background when it gets too big.
+- It is possible to combine both AOF and RDB in the same instance. Notice that, in this case, when Redis restarts the AOF file will be used to reconstruct the original dataset since it is guaranteed to be the most complete.
+
+**It's recommended to enable RDB and AOF simultaneously.** Beware that when using AOF you can have different fsync policies: no fsync at all, fsync every second, or fsync at every query. With the default policy of fsync every second, write performance is still great (fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress), **but you can only lose one second worth of writes**.
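+
+The following is a minimal sketch of the relevant `redis.conf` settings; the `save` thresholds and data directory are assumptions to be tuned for your environment:
+
+```
+# Enable AOF with the default "fsync every second" policy
+appendonly yes
+appendfsync everysec
+# Keep RDB snapshots as well: after 900s if at least 1 key changed, after 300s if at least 10 changed
+save 900 1
+save 300 10
+# Directory where the RDB and AOF files are written (an assumed path)
+dir /data/redis
+```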
+
+**Remember backup is also required** (disk may break, VM may disappear). Redis is very data backup friendly since you can copy RDB files while the database is running: the RDB is never modified once produced, and while it gets produced it uses a temporary name and is renamed into its final destination atomically using `rename` only when the new snapshot is complete. You can also copy the AOF file in order to create backups.
+
+Please read the [official documentation](https://redis.io/topics/persistence) for more information.
+
+## Backing up Redis Data
+
+**Make Sure to Backup Your Database.** Disks break, instances in the cloud disappear, and so forth.
+
+By default Redis saves snapshots of the dataset on disk, in a binary file called `dump.rdb`. You can configure Redis to have it save the dataset every N seconds if there are at least M changes in the dataset, or you can manually call the [`SAVE`](https://redis.io/commands/save) or [`BGSAVE`](https://redis.io/commands/bgsave) commands.
+
+Redis is very data backup friendly since you can copy RDB files while the database is running: the RDB is never modified once produced, and while it gets produced it uses a temporary name and is renamed into its final destination atomically using `rename(2)` only when the new snapshot is complete.
+
+This means that copying the RDB file is completely safe while the server is running. This is what we suggest:
+
+- Create a cron job on your server creating hourly snapshots of the RDB file in one directory, and daily snapshots in a different directory.
+- Every time the cron script runs, call the `find` command to delete snapshots that are too old: for instance you can keep hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Make sure to name the snapshots with date and time information (a possible script is sketched after this list).
+- At least once every day, transfer an RDB snapshot _outside your data center_ or at least _outside the physical machine_ running your Redis instance.
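+
+The following is a hypothetical hourly backup script following the rules above; all paths and retention values are assumptions and should be adapted to your environment:
+
+```bash
+#!/bin/bash
+# Hypothetical Redis RDB backup script, intended to be run hourly from cron
+set -e
+SRC=/data/redis/dump.rdb
+HOURLY_DIR=/backup/redis/hourly
+DAILY_DIR=/backup/redis/daily
+mkdir -p "$HOURLY_DIR" "$DAILY_DIR"
+# Copying the RDB file is safe while Redis is running
+cp "$SRC" "$HOURLY_DIR/dump-$(date +%Y%m%d-%H%M).rdb"
+cp "$SRC" "$DAILY_DIR/dump-$(date +%Y%m%d).rdb"
+# Keep hourly snapshots for 48 hours and daily snapshots for 60 days
+find "$HOURLY_DIR" -name 'dump-*.rdb' -mmin +2880 -delete
+find "$DAILY_DIR" -name 'dump-*.rdb' -mtime +60 -delete
+```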
+
+Please read the [official documentation](https://redis.io/topics/persistence) for more information.
+
+---
+
+## Recommended Managed Redis Service
+
+### Amazon ElastiCache for Redis
+
+[Amazon ElastiCache for Redis](https://aws.amazon.com/elasticache/redis) is a fully managed, Redis-compatible in-memory data store built for the cloud. It provides [automatic failover](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html), [automatic backup](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-automatic.html) features to ensure availability and durability.
+
+> **Note**: Amazon ElastiCache for Redis comes in two types: cluster mode disabled and cluster mode enabled. Because JuiceFS uses [transactions](https://redis.io/topics/transactions) to guarantee the atomicity of metadata operations, you cannot use the "cluster mode enabled" type.
+
+### Google Cloud Memorystore for Redis
+
+[Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for the Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.
+
+### Azure Cache for Redis
+
+[Azure Cache for Redis](https://azure.microsoft.com/en-us/services/cache) is a fully managed, in-memory cache that enables high-performance and scalable architectures. Use it to create cloud or hybrid deployments that handle millions of requests per second at sub-millisecond latency—all with the configuration, security, and availability benefits of a managed service.
+
+### Alibaba Cloud ApsaraDB for Redis
+
+[Alibaba Cloud ApsaraDB for Redis](https://www.alibabacloud.com/product/apsaradb-for-redis) is a database service that is compatible with native Redis protocols. It supports a hybrid of memory and hard disks for data persistence. ApsaraDB for Redis provides a highly available hot standby architecture and can scale to meet requirements for high-performance and low-latency read/write operations.
+
+> **Note**: ApsaraDB for Redis supports 3 types of [architectures](https://www.alibabacloud.com/help/doc-detail/86132.htm): standard, cluster and read/write splitting. Because JuiceFS uses [transactions](https://redis.io/topics/transactions) to guarantee the atomicity of metadata operations, you cannot use the cluster architecture.
+
+### Tencent Cloud TencentDB for Redis
+
+[Tencent Cloud TencentDB for Redis](https://intl.cloud.tencent.com/product/crs) is a caching and storage service compatible with the Redis protocol. It features a rich variety of data structure options to help you develop different types of business scenarios, and offers a complete set of database services such as primary-secondary hot backup, automatic switchover for disaster recovery, data backup, failover, instance monitoring, online scaling and data rollback.
+
+> **Note**: TencentDB for Redis supports 2 types of [architectures](https://intl.cloud.tencent.com/document/product/239/3205): standard and cluster. Because JuiceFS uses [transactions](https://redis.io/topics/transactions) to guarantee the atomicity of metadata operations, you cannot use the cluster architecture.
diff --git a/docs/en/administration/metadata_dump_load.md b/docs/en/administration/metadata_dump_load.md
new file mode 100644
index 0000000..4c4af70
--- /dev/null
+++ b/docs/en/administration/metadata_dump_load.md
@@ -0,0 +1,123 @@
+---
+sidebar_label: Metadata Backup & Recovery
+sidebar_position: 4
+slug: /metadata_dump_load
+---
+# JuiceFS Metadata Backup & Recovery
+
+:::tip
+- JuiceFS v0.15.2 started to support manual backup, recovery and inter-engine migration of metadata.
+- JuiceFS v1.0.0 starts to support automatic metadata backup
+:::
+
+## Manual Backup
+
+JuiceFS supports [multiple metadata storage engines](../reference/how_to_setup_metadata_engine.md), and each engine has a different internal data management format. To facilitate management, JuiceFS provides the `dump` command to write all metadata in a uniform format to a [JSON](https://www.json.org/json-en.html) file for backup. JuiceFS also provides the `load` command to restore or migrate backups to any metadata storage engine. For more information on these commands, please refer to [here](../reference/command_reference.md#juicefs-dump).
+
+### Metadata Backup
+
+Metadata can be exported to a file using the `dump` command provided by the JuiceFS client, e.g.
+
+```bash
+juicefs dump redis://192.168.1.6:6379 meta.dump
+```
+
+By default, this command starts from the root directory `/`, recursively traverses all the files in the directory tree, and writes the metadata information of each file to the output file in JSON format.
+
+:::note
+`juicefs dump` only guarantees the integrity of individual files themselves and does not provide a global point-in-time snapshot. If the business is still writing during the dump process, the final result will contain information from different points in time.
+:::
+
+Redis, MySQL and other databases have their own backup tools, such as [Redis RDB](https://redis.io/topics/persistence#backing-up-redis-data) and [mysqldump](https://dev.mysql.com/doc/mysql-backup-excerpt/5.7/en/mysqldump-sql-format.html). When using them as JuiceFS metadata storage, you still need to back up metadata regularly with each database's own backup tool.
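+
+As a hedged example, if MySQL is used as the metadata engine, a consistent dump could be taken with `mysqldump`; the host, user and database name below are assumptions:
+
+```bash
+# Dump the (assumed) "juicefs" metadata database in a single consistent transaction
+mysqldump -h 192.168.1.6 -u user -p --single-transaction juicefs > juicefs-meta-backup.sql
+```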
+
+The value of `juicefs dump` is that it can export complete metadata information in a uniform JSON format for easy management and preservation, and it can be recognized and imported by different metadata storage engines. In practice, the `dump` command should be used in conjunction with the backup tool that comes with the database to complement each other.
+
+:::note
+The above discussion is for metadata backup only. A complete file system backup solution should also include at least object storage data backup, such as offsite disaster recovery, recycle bin, multiple versions, etc.
+:::
+
+### Metadata Recovery
+
+:::tip
+JSON backups can only be restored to a `newly created database` or an `empty database`.
+:::
+
+Metadata from a backed up JSON file can be imported into a new **empty database** using the `load` command provided by the JuiceFS client, e.g.
+
+```bash
+juicefs load redis://192.168.1.6:6379 meta.dump
+```
+
+This command automatically handles conflicts caused by files from different points in time, recalculates the file system statistics (space usage, inode counters, etc.), and finally generates globally consistent metadata in the database. Alternatively, if you want to customize some of the metadata (be careful), you can try to manually modify the JSON file before loading.
+
+### Metadata Migration Between Engines
+
+:::tip
+The metadata migration operation requires the target database to be a `newly created database` or an `empty database`.
+:::
+
+Thanks to the universality of the JSON format, which is recognized by all metadata storage engines supported by JuiceFS, it is possible to export metadata information from one engine as a JSON backup and then import it into another engine, thus enabling the migration of metadata between different types of engines. For example:
+
+```bash
+$ juicefs dump redis://192.168.1.6:6379 meta.dump
+$ juicefs load mysql://user:password@(192.168.1.6:3306)/juicefs meta.dump
+```
+
+It is also possible to migrate directly through a system pipe:
+
+```bash
+$ juicefs dump redis://192.168.1.6:6379 | juicefs load mysql://user:password@(192.168.1.6:3306)/juicefs
+```
+
+:::caution
+To ensure consistent file system content before and after migration, you need to stop business writes during the migration process. Also, since the original object storage is still used after migration, make sure the old engine is offline or has read-only access to the object storage before the new metadata engine comes online; otherwise the file system may be corrupted.
+:::
+
+### Metadata Inspection
+
+In addition to exporting complete metadata information, the `dump` command also supports exporting the metadata of specific subdirectories. The exported JSON content is often used to help troubleshoot problems because it gives the user a very intuitive view of the internal information of all the files under a given directory tree. For example:
+
+```bash
+$ juicefs dump redis://192.168.1.6:6379 meta.dump --subdir /path/in/juicefs
+```
+
+Moreover, you can use tools like `jq` to analyze the exported file.
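+
+For instance, a couple of hedged `jq` invocations (the section name `Setting` is an assumption and may differ between JuiceFS versions):
+
+```bash
+# List the top-level sections of the exported metadata, whatever they are named
+jq 'keys' meta.dump
+
+# Pretty-print one section, assuming it is called "Setting"
+jq '.Setting' meta.dump
+```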
+
+:::note
+Please don't dump a directory that is too big on a production system, as it may slow down the server.
+:::
+
+## Automatic Backup
+
+Starting with JuiceFS v1.0.0, the client automatically backs up metadata and copies it to the object storage every hour, regardless of whether the file system is mounted via the `mount` command or accessed via the JuiceFS S3 gateway or the Hadoop Java SDK.
+
+The backup files are stored in the `meta` directory of the object storage. This directory is separate from the data store, is not visible at the mount point, and does not interfere with the data store; it can be viewed and managed using the file browser of the object storage.
+
+![](../images/meta-auto-backup-list.png)
+
+By default, the JuiceFS client backs up metadata once an hour. The frequency of automatic backups can be adjusted with the `--backup-meta` option when mounting the file system, for example, to perform the automatic backup every 8 hours:
+
+```shell
+$ sudo juicefs mount -d --backup-meta 8h redis://127.0.0.1:6379/1 /mnt
+```
+
+The backup frequency can be accurate to the second, and the supported units are as follows:
+
+- `h`: accurate to the hour, e.g. `1h`
+- `m`: accurate to the minute, e.g. `30m`, `1h30m`
+- `s`: accurate to the second, e.g. `50s`, `30m50s`, `1h30m50s`
+
+### Automatic Backup Policy
+
+Although automatic metadata backup is enabled by default for all clients, backup conflicts do not occur when multiple hosts share the same file system mount.
+
+JuiceFS maintains a global timestamp to ensure that only one client performs the backup operation at a time. When different backup periods are set on different clients, the backup is performed with the shortest period.
+
+### Backup Cleanup Policy
+
+JuiceFS periodically cleans up backups according to the following rules:
+
+- Keep all backups for up to 2 days.
+- For backups older than 2 days and newer than 2 weeks, keep 1 backup per day.
+- For backups older than 2 weeks and newer than 2 months, keep 1 backup per week.
+- For backups older than 2 months, keep 1 backup per month.
diff --git a/docs/en/administration/migration/_from_hdfs.md b/docs/en/administration/migration/_from_hdfs.md
new file mode 100644
index 0000000..fc22b31
--- /dev/null
+++ b/docs/en/administration/migration/_from_hdfs.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Migrate from HDFS
+sidebar_position: 3
+---
+# Migrate from HDFS
\ No newline at end of file
diff --git a/docs/en/administration/migration/_from_local.md b/docs/en/administration/migration/_from_local.md
new file mode 100644
index 0000000..eeb1e5c
--- /dev/null
+++ b/docs/en/administration/migration/_from_local.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Migrate from Local Disk
+sidebar_position: 1
+---
+# Migrate from Local Disk (or NAS)
\ No newline at end of file
diff --git a/docs/en/administration/migration/_from_s3.md b/docs/en/administration/migration/_from_s3.md
new file mode 100644
index 0000000..8e397ab
--- /dev/null
+++ b/docs/en/administration/migration/_from_s3.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Migrate from Object Storage
+sidebar_position: 2
+---
+# Migrate from Object Storage
\ No newline at end of file
diff --git a/docs/en/administration/monitoring.md b/docs/en/administration/monitoring.md
new file mode 100644
index 0000000..cfc0813
--- /dev/null
+++ b/docs/en/administration/monitoring.md
@@ -0,0 +1,214 @@
+---
+sidebar_label: Monitoring
+sidebar_position: 6
+---
+
+# Monitoring
+
+JuiceFS provides a [Prometheus](https://prometheus.io) API for each file system (the default API address is `http://localhost:9567/metrics`), which can be used to collect JuiceFS monitoring metrics. Once the monitoring metrics are collected, they can be quickly displayed via the [Grafana](https://grafana.com) dashboard template provided by JuiceFS.
+
+## Collecting monitoring metrics
+
+There are different ways to collect monitoring metrics depending on how JuiceFS is deployed, which are described below.
+
+### Mount point
+
+When the JuiceFS file system is mounted via the [`juicefs mount`](../reference/command_reference.md#juicefs-mount) command, you can collect monitoring metrics via the address `http://localhost:9567/metrics`, or you can customize it via the `--metrics` option. For example:
+
+```shell
+$ juicefs mount --metrics localhost:9567 ...
+```
+
+You can view these monitoring metrics using the command line tool:
+
+```shell
+$ curl http://localhost:9567/metrics
+```
+
+In addition, the root directory of each JuiceFS file system has a hidden file called `.stats`, through which you can also view monitoring metrics. For example (assuming here that the path to the mount point is `/jfs`):
+
+```shell
+$ cat /jfs/.stats
+```
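+
+To have Prometheus collect these metrics for a plain mount point (outside Kubernetes), a minimal static scrape job like the following sketch can be added to `prometheus.yml`; the target address assumes the default metrics port:
+
+```yaml
+scrape_configs:
+  - job_name: 'juicefs'
+    static_configs:
+      - targets: ['localhost:9567']
+```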
+
+### Kubernetes
+
+The [JuiceFS CSI Driver](../deployment/how_to_use_on_kubernetes.md) will provide monitoring metrics on the `9567` port of the mount pod by default, or you can customize it by adding the `metrics` option to the `mountOptions` (please refer to the [CSI Driver documentation](https://juicefs.com/docs/csi/examples/mount-options) for how to modify `mountOptions`), e.g.:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: juicefs-pv
+ labels:
+ juicefs-name: ten-pb-fs
+spec:
+ ...
+ mountOptions:
+ - metrics=0.0.0.0:9567
+```
+
+Add a scrape job to `prometheus.yml` to collect monitoring metrics:
+
+```yaml
+scrape_configs:
+ - job_name: 'juicefs'
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
+ action: keep
+ regex: juicefs-mount
+ - source_labels: [__address__]
+ action: replace
+ regex: ([^:]+)(:\d+)?
+ replacement: $1:9567
+ target_label: __address__
+ - source_labels: [__meta_kubernetes_pod_node_name]
+ target_label: node
+ action: replace
+```
+
+The above assumes that the Prometheus server is running inside the Kubernetes cluster. If your Prometheus server is running outside the Kubernetes cluster, make sure the Kubernetes cluster nodes are reachable from the Prometheus server, and refer to [this issue](https://github.com/prometheus/prometheus/issues/4633) to add the `api_server` and `tls_config` client auth to the above configuration, like this:
+
+```yaml
+scrape_configs:
+ - job_name: 'juicefs'
+ kubernetes_sd_configs:
+ - api_server:
+ role: pod
+ tls_config:
+ ca_file: <...>
+ cert_file: <...>
+ key_file: <...>
+ insecure_skip_verify: false
+ relabel_configs:
+ ...
+```
+
+### S3 Gateway
+
+:::note
+This feature needs to run JuiceFS client version 0.17.1 and above.
+:::
+
+The [JuiceFS S3 Gateway](../deployment/s3_gateway.md) will provide monitoring metrics at the address `http://localhost:9567/metrics` by default, or you can customize it with the `-metrics` option. For example:
+
+```shell
+$ juicefs gateway --metrics localhost:9567 ...
+```
+
+If you are deploying JuiceFS S3 Gateway in Kubernetes, you can refer to the Prometheus configuration in the [Kubernetes](#kubernetes) section to collect monitoring metrics (the difference is mainly in the regular expression for the label `__meta_kubernetes_pod_label_app_kubernetes_io_name`), e.g.:
+
+```yaml
+scrape_configs:
+ - job_name: 'juicefs-s3-gateway'
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
+ action: keep
+ regex: juicefs-s3-gateway
+ - source_labels: [__address__]
+ action: replace
+ regex: ([^:]+)(:\d+)?
+ replacement: $1:9567
+ target_label: __address__
+ - source_labels: [__meta_kubernetes_pod_node_name]
+ target_label: node
+ action: replace
+```
+
+#### Collected via Prometheus Operator
+
+[Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) enables users to quickly deploy and manage Prometheus in Kubernetes. With the `ServiceMonitor` CRD provided by Prometheus Operator, scrape configuration can be generated automatically. For example (assuming that the `Service` of the JuiceFS S3 Gateway is deployed in the `kube-system` namespace):
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: juicefs-s3-gateway
+spec:
+ namespaceSelector:
+ matchNames:
+ - kube-system
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: juicefs-s3-gateway
+ endpoints:
+ - port: metrics
+```
+
+For more information about Prometheus Operator, please check [official document](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md).
+
+### Hadoop
+
+The [JuiceFS Hadoop Java SDK](../deployment/hadoop_java_sdk.md) supports reporting monitoring metrics to [Pushgateway](https://github.com/prometheus/pushgateway) and then letting Prometheus scrape the metrics from Pushgateway.
+
+Please enable metrics reporting with the following configuration:
+
+```xml
+<property>
+  <name>juicefs.push-gateway</name>
+  <value>host:port</value>
+</property>
+```
+
+At the same time, the frequency of reporting metrics can be modified through the `juicefs.push-interval` configuration. The default is to report once every 10 seconds. For all configurations supported by JuiceFS Hadoop Java SDK, please refer to [documentation](../deployment/hadoop_java_sdk.md#client-configurations).
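+
+For instance, a hedged snippet to change the reporting interval (the value of `30` seconds is only an illustrative assumption):
+
+```xml
+<property>
+  <name>juicefs.push-interval</name>
+  <value>30</value>
+</property>
+```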
+
+:::info
+According to the suggestion of [Pushgateway official document](https://github.com/prometheus/pushgateway/blob/master/README.md#configure-the-pushgateway-as-a-target-to-scrape), Prometheus's [scrape configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) needs to set `honor_labels: true`.
+
+It is important to note that the timestamp of the metrics scraped by Prometheus from Pushgateway is not the time when the JuiceFS Hadoop Java SDK reported it, but the time when it was scraped. For details, please refer to [Pushgateway official document](https://github.com/prometheus/pushgateway/blob/master/README.md#about-timestamps).
+
+By default, Pushgateway will only save metrics in memory. If you need to persist to disk, you can specify the file path for saving with the `--persistence.file` option and the frequency of saving to the file with the `--persistence.interval` option (the default save time is 5 minutes).
+:::
+
+:::note
+Each process using the JuiceFS Hadoop Java SDK will have a unique set of metrics, and Pushgateway will always remember all the collected metrics. This results in the continuous accumulation of metrics, taking up too much memory and slowing down Prometheus scraping. It is recommended to clean up metrics on Pushgateway regularly.
+
+Regularly use the following command to clean up the metrics on Pushgateway. Clearing the metrics will not affect the running JuiceFS Hadoop Java SDK, which will continue to report data. **Note that the `--web.enable-admin-api` option must be specified when Pushgateway is started, and the following command will clear all monitoring metrics in Pushgateway.**
+
+```bash
+$ curl -X PUT http://host:9091/api/v1/admin/wipe
+```
+:::
+
+For more information about Pushgateway, please check [official document](https://github.com/prometheus/pushgateway/blob/master/README.md).
+
+### Use Consul as registration center
+
+:::note
+This feature needs to run JuiceFS client version 1.0.0 and above.
+:::
+
+JuiceFS supports using Consul as a registration center for the metrics API. The default Consul address is `127.0.0.1:8500`. You can customize the address through the `--consul` option, e.g.:
+
+```shell
+$ juicefs mount --consul 1.2.3.4:8500 ...
+```
+
+When the Consul address is configured, the `--metrics` option does not need to be configured: JuiceFS will automatically configure the metrics URL according to its own network and port conditions. If `--metrics` is set at the same time, it will first try to listen on the configured metrics URL.
+
+For each instance registered to Consul, its `serviceName` is `juicefs`, and the format of `serviceId` is `<IP>:<mount-point>`, for example: `127.0.0.1:/tmp/jfs`.
+
+The meta of each instance contains two fields: `hostname` and `mountpoint`. When `mountpoint` is `s3gateway`, it means that the instance is an S3 gateway.
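+
+With Consul registration in place, Prometheus can discover JuiceFS instances through Consul service discovery; the following sketch assumes the default Consul address:
+
+```yaml
+scrape_configs:
+  - job_name: 'juicefs'
+    consul_sd_configs:
+      - server: '127.0.0.1:8500'
+        services: ['juicefs']
+```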
+
+## Display monitoring metrics
+
+### Grafana dashboard template
+
+JuiceFS provides some dashboard templates for Grafana, which can be imported to show the collected metrics in Prometheus. The dashboard templates currently available are:
+
+| Name | Description |
+| ---- | ----------- |
+| [`grafana_template.json`](https://github.com/juicedata/juicefs/blob/main/docs/en/grafana_template.json) | For showing metrics collected from mount points, the S3 gateway (non-Kubernetes deployment) and the Hadoop Java SDK |
+| [`grafana_template_k8s.json`](https://github.com/juicedata/juicefs/blob/main/docs/en/grafana_template_k8s.json) | For showing metrics collected from the Kubernetes CSI Driver and the S3 gateway (Kubernetes deployment) |
+
+A sample Grafana dashboard looks like this:
+
+![JuiceFS Grafana dashboard](../images/grafana_dashboard.png)
+
+## Monitoring metrics reference
+
+Please refer to the ["JuiceFS Metrics"](../reference/p8s_metrics.md) document.
diff --git a/docs/en/administration/quota.md b/docs/en/administration/quota.md
new file mode 100644
index 0000000..14c3f64
--- /dev/null
+++ b/docs/en/administration/quota.md
@@ -0,0 +1,128 @@
+---
+sidebar_label: Storage Quota
+sidebar_position: 7
+---
+# JuiceFS Storage Quota
+
+JuiceFS v0.14.2 began to support file system level storage quotas. This feature includes:
+
+- Limit the total available capacity of the file system
+- Limit the total inodes of the file system
+
+:::tip
+The storage quota settings are stored in the metadata engine so that all mount points can read them. The client at each mount point caches its own used capacity and inode count and synchronizes them with the metadata engine once per second; it also reads the latest usage values from the metadata engine every 10 seconds to synchronize usage information among mount points. This synchronization mechanism does not guarantee that usage data is perfectly accurate.
+:::
+
+## View file system information
+
+In a Linux environment, for example, the default capacity of a JuiceFS file system is reported as `1.0P` by the system's `df` command:
+
+```shell
+$ df -Th | grep juicefs
+JuiceFS:ujfs fuse.juicefs 1.0P 682M 1.0P 1% /mnt
+```
+
+:::note
+JuiceFS provides a POSIX interface through FUSE. Because the underlying object storage usually has virtually unlimited capacity, the reported capacity is only an estimate (which effectively means unlimited) rather than the actual capacity, and the used value changes dynamically with actual usage.
+:::
+
+The `config` command that comes with the client allows you to view the details of a file system:
+
+```shell
+$ juicefs config $METAURL
+{
+ "Name": "ujfs",
+ "UUID": "1aa6d290-279b-432f-b9b5-9d7fd597dec2",
+ "Storage": "minio",
+ "Bucket": "127.0.0.1:9000/jfs1",
+ "AccessKey": "herald",
+ "SecretKey": "removed",
+ "BlockSize": 4096,
+ "Compression": "none",
+ "Shards": 0,
+ "Partitions": 0,
+ "Capacity": 0,
+ "Inodes": 0,
+ "TrashDays": 0
+}
+```
+
+## Limit total capacity
+
+The capacity limit in GiB can be set with `--capacity` when creating a file system, e.g. to create a file system with an available capacity of 100 GiB:
+
+```shell
+$ juicefs format --storage minio \
+--bucket 127.0.0.1:9000/jfs1 \
+...
+--capacity 100 \
+$METAURL myjfs
+```
+
+You can also set a capacity limit for a created filesystem with the `config` command:
+
+```shell
+$ juicefs config $METAURL --capacity 100
+2022/01/27 12:31:39.506322 juicefs[16259] : Meta address: postgres://herald@127.0.0.1:5432/jfs1
+2022/01/27 12:31:39.521232 juicefs[16259] : The latency to database is too high: 14.771783ms
+ capacity: 0 GiB -> 100 GiB
+```
+
+For a file system with a storage quota set, the reported capacity becomes the quota capacity:
+
+```shell
+$ df -Th | grep juicefs
+JuiceFS:ujfs fuse.juicefs 100G 682M 100G 1% /mnt
+```
+
+## Limit the total number of inodes
+
+On Linux systems, each file (a folder is also a type of file) has an inode regardless of size, so limiting the number of inodes is equivalent to limiting the number of files.
+
+The inode quota can be set with `--inodes` when creating the file system, for example:
+
+```shell
+$ juicefs format --storage minio \
+--bucket 127.0.0.1:9000/jfs1 \
+...
+--inodes 100 \
+$METAURL myjfs
+```
+
+The file system created by the above command allows only 100 files to be stored, but there is no limit to the size of individual files; for example, a single file of 1 TB or even larger is fine, as long as the total number of files does not exceed 100.
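+
+Since each file consumes exactly one inode, inode usage at the mount point can be checked with the standard `df -i` command; the output below is purely illustrative:
+
+```shell
+$ df -i /mnt
+Filesystem     Inodes IUsed IFree IUse% Mounted on
+JuiceFS:myjfs     100     3    97    3% /mnt
+```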
+
+You can also set an inode quota for an existing file system by using the `config` command:
+
+```shell
+$ juicefs config $METAURL --inodes 100
+2022/01/27 12:35:37.311465 juicefs[16407] : Meta address: postgres://herald@127.0.0.1:5432/jfs1
+2022/01/27 12:35:37.322991 juicefs[16407] : The latency to database is too high: 11.413961ms
+ inodes: 0 -> 100
+```
+
+## Putting them together
+
+You can combine `--capacity` and `--inodes` to set quotas for a file system more flexibly, for example, to create a file system limited to a total capacity of 100 TiB and at most 100,000 files:
+
+```shell
+$ juicefs format --storage minio \
+--bucket 127.0.0.1:9000/jfs1 \
+...
+--capacity 102400 \
+--inodes 100000 \
+$METAURL myjfs
+```
+
+Similarly, for an existing file system, the two quotas can be set separately:
+
+```shell
+juicefs config $METAURL --capacity 102400
+```
+
+```shell
+juicefs config $METAURL --inodes 100000
+```
+
+:::tip
+The client reads the latest storage quota settings from the metadata engine every 60 seconds to update its local settings, so it may take up to 60 seconds for other mount points to see a quota update.
+:::
diff --git a/docs/en/administration/status_check_and_maintenance.md b/docs/en/administration/status_check_and_maintenance.md
new file mode 100644
index 0000000..ca4c4f3
--- /dev/null
+++ b/docs/en/administration/status_check_and_maintenance.md
@@ -0,0 +1,8 @@
+---
+sidebar_label: Status Check & Maintenance
+sidebar_position: 8
+---
+# Status Check & Maintenance
+
+:::note
+Work in progress.
\ No newline at end of file
diff --git a/docs/en/administration/sync_accounts_between_multiple_hosts.md b/docs/en/administration/sync_accounts_between_multiple_hosts.md
new file mode 100644
index 0000000..964cda2
--- /dev/null
+++ b/docs/en/administration/sync_accounts_between_multiple_hosts.md
@@ -0,0 +1,134 @@
+---
+sidebar_label: Sync Accounts between Multiple Hosts
+sidebar_position: 10
+slug: /sync_accounts_between_multiple_hosts
+---
+
+# Sync Accounts between Multiple Hosts
+
+JuiceFS supports POSIX compatible ACL to manage permissions in the granularity of directory or file. The behavior is the same as a local file system.
+
+To provide users with an intuitive and consistent permission management experience (e.g. the files accessible by user A on host X should be accessible on host Y by the same user), a user who wants to access JuiceFS should have the same UID and GID on all hosts.
+
+Here we provide a simple [Ansible](https://www.ansible.com/community) playbook to demonstrate how to ensure an account has the same UID and GID on multiple hosts.
+
+:::note
+If you are using JuiceFS in a Hadoop environment, besides syncing accounts between multiple hosts, you can also specify a global user list and user group file; please refer to [here](../deployment/hadoop_java_sdk.md#other-configurations) for more information.
+:::
+
+## Install Ansible
+
+Select a host as a [control node](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements) which can access all hosts using `ssh` with the same privileged account, such as `root` or another sudo account. Install Ansible on this host. Read [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible) for more installation details.
+
+
+
+## Ensure the same account on all hosts
+
+Create an empty directory named `account-sync` and save the following content in `play.yaml` under this directory:
+
+```yaml
+---
+- hosts: all
+ tasks:
+ - name: "Ensure group {{ group }} with gid {{ gid }} exists"
+ group:
+ name: "{{ group }}"
+ gid: "{{ gid }}"
+ state: present
+
+ - name: "Ensure user {{ user }} with uid {{ uid }} exists"
+ user:
+ name: "{{ user }}"
+ uid: "{{ uid }}"
+ group: "{{ gid }}"
+ state: present
+```
+
+
+
+Create a file named `hosts` in this directory and place the IP addresses of all hosts that need the account in it, one host IP per line.
+
+Here we ensure an account `alice` with UID 1200 and group `staff` with GID 500 on 2 hosts:
+
+```shell
+~/account-sync$ cat hosts
+172.16.255.163
+172.16.255.180
+~/account-sync$ ansible-playbook -i hosts -u root --ssh-extra-args "-o StrictHostKeyChecking=no" \
+--extra-vars "group=staff gid=500 user=alice uid=1200" play.yaml
+
+PLAY [all] ************************************************************************************************
+
+TASK [Gathering Facts] ************************************************************************************
+ok: [172.16.255.180]
+ok: [172.16.255.163]
+
+TASK [Ensure group staff with gid 500 exists] *************************************************************
+ok: [172.16.255.163]
+ok: [172.16.255.180]
+
+TASK [Ensure user alice with uid 1200 exists] *************************************************************
+changed: [172.16.255.180]
+changed: [172.16.255.163]
+
+PLAY RECAP ************************************************************************************************
+172.16.255.163 : ok=3 changed=1 unreachable=0 failed=0
+172.16.255.180 : ok=3 changed=1 unreachable=0 failed=0
+```
+
+Now the new account `alice:staff` has been created on these 2 hosts.
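+
+As a quick sanity check, an ad-hoc Ansible command can confirm the UID and GID on all hosts; the invocation below is a hedged example:
+
+```shell
+~/account-sync$ ansible all -i hosts -u root -m command -a "id alice"
+```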
+
+If the UID or GID specified has already been allocated to another user or group on some hosts, the creation will fail:
+
+```shell
+~/account-sync$ ansible-playbook -i hosts -u root --ssh-extra-args "-o StrictHostKeyChecking=no" \
+--extra-vars "group=ubuntu gid=1000 user=ubuntu uid=1000" play.yaml
+
+PLAY [all] ************************************************************************************************
+
+TASK [Gathering Facts] ************************************************************************************
+ok: [172.16.255.180]
+ok: [172.16.255.163]
+
+TASK [Ensure group ubuntu with gid 1000 exists] ***********************************************************
+ok: [172.16.255.163]
+fatal: [172.16.255.180]: FAILED! => {"changed": false, "msg": "groupmod: GID '1000' already exists\n", "name": "ubuntu"}
+
+TASK [Ensure user ubuntu with uid 1000 exists] ************************************************************
+ok: [172.16.255.163]
+ to retry, use: --limit @/home/ubuntu/account-sync/play.retry
+
+PLAY RECAP ************************************************************************************************
+172.16.255.163 : ok=3 changed=0 unreachable=0 failed=0
+172.16.255.180 : ok=1 changed=0 unreachable=0 failed=1
+```
+
+In the above example, the group ID 1000 has already been allocated to another group on host `172.16.255.180`, so we should **change the GID** or **delete the group with GID 1000** on host `172.16.255.180`, and then run the playbook again.
+
+:::caution
+If the user account already exists on the host and we change it to another UID or GID value, the user may lose permissions to the files and directories which they previously had access to. For example:
+
+```shell
+$ ls -l /tmp/hello.txt
+-rw-r--r-- 1 alice staff 6 Apr 26 21:43 /tmp/hello.txt
+$ id alice
+uid=1200(alice) gid=500(staff) groups=500(staff)
+```
+
+We change the UID of alice from 1200 to 1201:
+
+```shell
+~/account-sync$ ansible-playbook -i hosts -u root --ssh-extra-args "-o StrictHostKeyChecking=no" \
+--extra-vars "group=staff gid=500 user=alice uid=1201" play.yaml
+```
+
+Now we have no permission to remove this file as its owner is not alice:
+
+```shell
+$ ls -l /tmp/hello.txt
+-rw-r--r-- 1 1200 staff 6 Apr 26 21:43 /tmp/hello.txt
+$ rm /tmp/hello.txt
+rm: remove write-protected regular file '/tmp/hello.txt'? y
+rm: cannot remove '/tmp/hello.txt': Operation not permitted
+```
+:::
diff --git a/docs/en/benchmark/_performance_tuning.md b/docs/en/benchmark/_performance_tuning.md
new file mode 100644
index 0000000..1411c6c
--- /dev/null
+++ b/docs/en/benchmark/_performance_tuning.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Performance Tuning
+sidebar_position: 5
+---
+# Performance Tuning
\ No newline at end of file
diff --git a/docs/en/benchmark/benchmark.md b/docs/en/benchmark/benchmark.md
new file mode 100644
index 0000000..b41f05c
--- /dev/null
+++ b/docs/en/benchmark/benchmark.md
@@ -0,0 +1,41 @@
+---
+sidebar_label: Performance Benchmark
+sidebar_position: 1
+slug: .
+---
+# Performance Benchmark
+
+## Basic benchmark
+
+JuiceFS provides a subcommand to run a few basic benchmarks to understand how it works in your environment:
+
+![JuiceFS Bench](../images/juicefs-bench.png)
+
+## Throughput
+
+We performed a sequential read/write benchmark on JuiceFS, [EFS](https://aws.amazon.com/efs) and [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) with [fio](https://github.com/axboe/fio); here is the result:
+
+[![Sequential Read Write Benchmark](../images/sequential-read-write-benchmark.svg)](../images/sequential-read-write-benchmark.svg)
+
+It shows that JuiceFS can provide 10X more throughput than the other two; read [more details](fio.md).
+
+## Metadata IOPS
+
+We performed a simple metadata benchmark on JuiceFS, [EFS](https://aws.amazon.com/efs) and [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) with [mdtest](https://github.com/hpc/ior); here is the result:
+
+[![Metadata Benchmark](../images/metadata-benchmark.svg)](../images/metadata-benchmark.svg)
+
+It shows that JuiceFS can provide significantly more metadata IOPS than the other two; read [more details](mdtest.md).
+
+## Analyze performance
+
+There is a virtual file called `.accesslog` in the root of JuiceFS that shows all the operations and the time they take, for example:
+
+```
+$ cat /jfs/.accesslog
+2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>
+2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>
+2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006>
+```
+
+The last number on each line is the time (in seconds) the current operation takes. You can use this directly to debug and analyze performance issues, or try `./juicefs profile /jfs` to monitor real-time statistics. Please run `./juicefs profile -h` or refer to [here](../benchmark/operations_profiling.md) to learn more about this subcommand.
diff --git a/docs/en/benchmark/fio.md b/docs/en/benchmark/fio.md
new file mode 100644
index 0000000..456030c
--- /dev/null
+++ b/docs/en/benchmark/fio.md
@@ -0,0 +1,73 @@
+---
+sidebar_label: Benchmark with fio
+sidebar_position: 7
+slug: /fio
+---
+# Benchmark with fio
+
+## Testing Approach
+
+We performed a sequential read/write benchmark on JuiceFS, [EFS](https://aws.amazon.com/efs) and [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) with [fio](https://github.com/axboe/fio).
+
+## Testing Tool
+
+The following tests were performed with fio 3.1.
+
+Sequential read test (numjobs: 1):
+
+```
+fio --name=sequential-read --directory=/s3fs --rw=read --refill_buffers --bs=4M --size=4G
+fio --name=sequential-read --directory=/efs --rw=read --refill_buffers --bs=4M --size=4G
+fio --name=sequential-read --directory=/jfs --rw=read --refill_buffers --bs=4M --size=4G
+```
+
+Sequential write test (numjobs: 1):
+
+```
+fio --name=sequential-write --directory=/s3fs --rw=write --refill_buffers --bs=4M --size=4G --end_fsync=1
+fio --name=sequential-write --directory=/efs --rw=write --refill_buffers --bs=4M --size=4G --end_fsync=1
+fio --name=sequential-write --directory=/jfs --rw=write --refill_buffers --bs=4M --size=4G --end_fsync=1
+```
+
+Sequential read test (numjobs: 16):
+
+```
+fio --name=big-file-multi-read --directory=/s3fs --rw=read --refill_buffers --bs=4M --size=4G --numjobs=16
+fio --name=big-file-multi-read --directory=/efs --rw=read --refill_buffers --bs=4M --size=4G --numjobs=16
+fio --name=big-file-multi-read --directory=/jfs --rw=read --refill_buffers --bs=4M --size=4G --numjobs=16
+```
+
+Sequential write test (numjobs: 16):
+
+```
+fio --name=big-file-multi-write --directory=/s3fs --rw=write --refill_buffers --bs=4M --size=4G --numjobs=16 --end_fsync=1
+fio --name=big-file-multi-write --directory=/efs --rw=write --refill_buffers --bs=4M --size=4G --numjobs=16 --end_fsync=1
+fio --name=big-file-multi-write --directory=/jfs --rw=write --refill_buffers --bs=4M --size=4G --numjobs=16 --end_fsync=1
+```
+
+## Testing Environment
+
+In the following test results, all fio tests are based on a c5d.18xlarge EC2 instance (72 CPUs, 144 GB RAM) running Ubuntu 18.04 LTS (Kernel 5.4.0); JuiceFS uses a local Redis instance (version 4.0.9) to store metadata.
+
+JuiceFS mount command:
+
+```
+./juicefs format --storage=s3 --bucket=https://<BUCKET>.s3.<REGION>.amazonaws.com localhost benchmark
+./juicefs mount --max-uploads=150 --io-retries=20 localhost /jfs
+```
+
+EFS mount command (the same as the configuration page):
+
+```
+mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <EFS-ID>.efs.<REGION>.amazonaws.com:/ /efs
+```
+
+S3FS (version 1.82) mount command:
+
+```
+s3fs <BUCKET>:/s3fs /s3fs -o host=https://s3.<REGION>.amazonaws.com,endpoint=<REGION>,passwd_file=${HOME}/.passwd-s3fs
+```
+
+## Testing Result
+
+![Sequential Read Write Benchmark](../images/sequential-read-write-benchmark.svg)
diff --git a/docs/en/benchmark/mdtest.md b/docs/en/benchmark/mdtest.md
new file mode 100644
index 0000000..abcd435
--- /dev/null
+++ b/docs/en/benchmark/mdtest.md
@@ -0,0 +1,122 @@
+---
+sidebar_label: Benchmark with mdtest
+sidebar_position: 8
+slug: /mdtest
+---
+# Benchmark with mdtest
+
+## Testing Approach
+
+We performed a metadata benchmark on JuiceFS, [EFS](https://aws.amazon.com/efs) and [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) with [mdtest](https://github.com/hpc/ior).
+
+## Testing Tool
+
+The following tests were performed with mdtest 3.4.
+The arguments of mdtest were adjusted to ensure each command finishes within 5 minutes.
+
+```
+./mdtest -d /s3fs/mdtest -b 6 -I 8 -z 2
+./mdtest -d /efs/mdtest -b 6 -I 8 -z 4
+./mdtest -d /jfs/mdtest -b 6 -I 8 -z 4
+```
+
+## Testing Environment
+
+In the following test results, all mdtest tests are based on a c5.large EC2 instance (2 CPUs, 4 GB RAM) running Ubuntu 18.04 LTS (Kernel 5.4.0); JuiceFS uses Redis (version 4.0.9) running on a c5.large EC2 instance in the same availability zone to store metadata.
+
+JuiceFS mount command:
+
+```
+./juicefs format --storage=s3 --bucket=https://<BUCKET>.s3.<REGION>.amazonaws.com localhost benchmark
+nohup ./juicefs mount localhost /jfs &
+```
+
+EFS mount command (the same as the configuration page):
+
+```
+mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <EFS-ID>.efs.<REGION>.amazonaws.com:/ /efs
+```
+
+S3FS (version 1.82) mount command:
+
+```
+s3fs <BUCKET>:/s3fs /s3fs -o host=https://s3.<REGION>.amazonaws.com,endpoint=<REGION>,passwd_file=${HOME}/.passwd-s3fs
+```
+
+## Testing Result
+
+![Metadata Benchmark](../images/metadata-benchmark.svg)
+
+### S3FS
+```
+mdtest-3.4.0+dev was launched with 1 total task(s) on 1 node(s)
+Command line used: ./mdtest '-d' '/s3fs/mdtest' '-b' '6' '-I' '8' '-z' '2'
+WARNING: Read bytes is 0, thus, a read test will actually just open/close.
+Path : /s3fs/mdtest
+FS : 256.0 TiB Used FS: 0.0% Inodes: 0.0 Mi Used Inodes: -nan%
+Nodemap: 1
+1 tasks, 344 files/directories
+
+SUMMARY rate: (of 1 iterations)
+ Operation Max Min Mean Std Dev
+ --------- --- --- ---- -------
+ Directory creation : 5.977 5.977 5.977 0.000
+ Directory stat : 435.898 435.898 435.898 0.000
+ Directory removal : 8.969 8.969 8.969 0.000
+ File creation : 5.696 5.696 5.696 0.000
+ File stat : 68.692 68.692 68.692 0.000
+ File read : 33.931 33.931 33.931 0.000
+ File removal : 23.658 23.658 23.658 0.000
+ Tree creation : 5.951 5.951 5.951 0.000
+ Tree removal : 9.889 9.889 9.889 0.000
+```
+
+### EFS
+
+```
+mdtest-3.4.0+dev was launched with 1 total task(s) on 1 node(s)
+Command line used: ./mdtest '-d' '/efs/mdtest' '-b' '6' '-I' '8' '-z' '4'
+WARNING: Read bytes is 0, thus, a read test will actually just open/close.
+Path : /efs/mdtest
+FS : 8388608.0 TiB Used FS: 0.0% Inodes: 0.0 Mi Used Inodes: -nan%
+Nodemap: 1
+1 tasks, 12440 files/directories
+
+SUMMARY rate: (of 1 iterations)
+ Operation Max Min Mean Std Dev
+ --------- --- --- ---- -------
+ Directory creation : 192.301 192.301 192.301 0.000
+ Directory stat : 1311.166 1311.166 1311.166 0.000
+ Directory removal : 213.132 213.132 213.132 0.000
+ File creation : 179.293 179.293 179.293 0.000
+ File stat : 915.230 915.230 915.230 0.000
+ File read : 371.012 371.012 371.012 0.000
+ File removal : 217.498 217.498 217.498 0.000
+ Tree creation : 187.906 187.906 187.906 0.000
+ Tree removal : 218.357 218.357 218.357 0.000
+```
+
+### JuiceFS
+
+```
+mdtest-3.4.0+dev was launched with 1 total task(s) on 1 node(s)
+Command line used: ./mdtest '-d' '/jfs/mdtest' '-b' '6' '-I' '8' '-z' '4'
+WARNING: Read bytes is 0, thus, a read test will actually just open/close.
+Path : /jfs/mdtest
+FS : 1024.0 TiB Used FS: 0.0% Inodes: 10.0 Mi Used Inodes: 0.0%
+Nodemap: 1
+1 tasks, 12440 files/directories
+
+SUMMARY rate: (of 1 iterations)
+ Operation Max Min Mean Std Dev
+ --------- --- --- ---- -------
+ Directory creation : 1416.582 1416.582 1416.582 0.000
+ Directory stat : 3810.083 3810.083 3810.083 0.000
+ Directory removal : 1115.108 1115.108 1115.108 0.000
+ File creation : 1410.288 1410.288 1410.288 0.000
+ File stat : 5023.227 5023.227 5023.227 0.000
+ File read : 3487.947 3487.947 3487.947 0.000
+ File removal : 1163.371 1163.371 1163.371 0.000
+ Tree creation : 1503.004 1503.004 1503.004 0.000
+ Tree removal : 1119.806 1119.806 1119.806 0.000
+```
diff --git a/docs/en/benchmark/metadata_engines_benchmark.md b/docs/en/benchmark/metadata_engines_benchmark.md
new file mode 100644
index 0000000..2253816
--- /dev/null
+++ b/docs/en/benchmark/metadata_engines_benchmark.md
@@ -0,0 +1,188 @@
+---
+sidebar_label: Metadata Engines Benchmark
+sidebar_position: 6
+slug: /metadata_engines_benchmark
+---
+# Metadata Engines Benchmark
+
+Conclusion first:
+
+- For pure metadata operations, MySQL costs about 2 ~ 4 times as much as Redis; TiKV has similar performance to MySQL, and in most cases it costs a bit less
+- For small I/O (~100 KiB) workloads, total time costs with MySQL are about 1 ~ 3 times those with Redis; TiKV performs similarly to MySQL
+- For large I/O (~4 MiB) workloads, total time costs with different metadata engines show no obvious difference (object storage becomes the bottleneck)
+
+>**Note**:
+>
+>1. By changing `appendfsync` from `always` to `everysec`, Redis gains performance boost but loses a bit of data reliability; more information can be found [here](https://redis.io/topics/persistence)
+>2. Both Redis and MySQL store only one replica locally, while TiKV stores three replicas in three different hosts using Raft protocol
+
+
+Details are provided below. Please note that all the tests are run with the same object storage (to store data), client hosts and metadata hosts; only the metadata engines differ.
+
+## Environment
+
+### JuiceFS Version
+
+juicefs version 0.16-dev (2021-07-20 9efa870)
+
+### Object Storage
+
+Amazon S3
+
+### Client Hosts
+
+- Amazon c5.xlarge: 4 vCPUs, 8 GiB Memory, Up to 10 Gigabit Network
+- Ubuntu 18.04.4 LTS
+
+### Meta Hosts
+
+- Amazon c5d.xlarge: 4 vCPUs, 8 GiB Memory, Up to 10 Gigabit Network, 100 GB SSD (local storage for metadata engines)
+- Ubuntu 18.04.4 LTS
+- SSD is formatted as ext4 and mounted on `/data`
+
+### Meta Engines
+
+#### Redis
+
+- Version: [6.2.3](https://download.redis.io/releases/redis-6.2.3.tar.gz)
+- Configuration:
+ - appendonly: yes
+ - appendfsync: always or everysec
+ - dir: `/data/redis`
+
+#### MySQL
+
+- Version: 8.0.25
+- `/var/lib/mysql` is bind mounted on `/data/mysql`
+
+#### TiKV
+
+- Version: 5.1.0
+- Configuration:
+ - deploy_dir: `/data/tikv-deploy`
+ - data_dir: `/data/tikv-data`
+
+## Tools
+
+All the following tests are run for each metadata engine.
+
+### Golang Benchmark
+
+Simple benchmarks within the source code: `pkg/meta/benchmarks_test.go`.
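+
+These benchmarks could be run with the standard Go tooling, roughly as follows (the flags are an assumption, and the metadata engine under test must be reachable):
+
+```bash
+go test -run=NONE -bench=. ./pkg/meta/
+```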
+
+### JuiceFS Bench
+
+JuiceFS provides a basic benchmark command:
+
+```bash
+$ ./juicefs bench /mnt/jfs
+```
+
+### mdtest
+
+- Version: mdtest-3.4.0+dev
+
+Run parallel tests on 3 client nodes:
+
+```bash
+$ cat myhost
+client1 slots=4
+client2 slots=4
+client3 slots=4
+```
+
+Test commands:
+
+```bash
+# metadata only
+$ mpirun --use-hwthread-cpus --allow-run-as-root -np 12 --hostfile myhost --map-by slot /root/mdtest -b 3 -z 1 -I 100 -u -d /mnt/jfs
+
+# 12000 * 100KiB files
+$ mpirun --use-hwthread-cpus --allow-run-as-root -np 12 --hostfile myhost --map-by slot /root/mdtest -F -w 102400 -I 1000 -z 0 -u -d /mnt/jfs
+```
+
+### fio
+
+- Version: fio-3.1
+
+```bash
+fio --name=big-write --directory=/mnt/jfs --rw=write --refill_buffers --bs=4M --size=4G --numjobs=4 --end_fsync=1 --group_reporting
+```
+
+## Results
+
+### Golang Benchmark
+
+- Shows time cost (us/op), smaller is better
+- Number in parentheses is the multiple of Redis-Always cost (`always` and `everysec` are candidates for Redis configuration `appendfsync`)
+- Because of the metadata cache, the results of `Read` are all less than 1 µs and are not meaningfully comparable for now
+
+| | Redis-Always | Redis-Everysec | MySQL | TiKV |
+| ------------ | ------------ | -------------- | ----- | ---- |
+| mkdir | 986 | 700 (0.7) | 2274 (2.3) | 1961 (2.0) |
+| mvdir | 1116 | 940 (0.8) | 3690 (3.3) | 2145 (1.9) |
+| rmdir | 981 | 817 (0.8) | 2980 (3.0) | 2300 (2.3) |
+| readdir_10 | 376 | 378 (1.0) | 1365 (3.6) | 965 (2.6) |
+| readdir_1k | 1804 | 1819 (1.0) | 15449 (8.6) | 6776 (3.8) |
+| mknod | 968 | 665 (0.7) | 2325 (2.4) | 1997 (2.1) |
+| create | 957 | 703 (0.7) | 2291 (2.4) | 1971 (2.1) |
+| rename | 1082 | 1040 (1.0) | 3701 (3.4) | 2162 (2.0) |
+| unlink | 842 | 710 (0.8) | 3293 (3.9) | 2217 (2.6) |
+| lookup | 118 | 127 (1.1) | 409 (3.5) | 571 (4.8) |
+| getattr | 108 | 120 (1.1) | 358 (3.3) | 285 (2.6) |
+| setattr | 568 | 490 (0.9) | 1239 (2.2) | 1720 (3.0) |
+| access | 109 | 116 (1.1) | 354 (3.2) | 283 (2.6) |
+| setxattr | 237 | 113 (0.5) | 1197 (5.1) | 1508 (6.4) |
+| getxattr | 110 | 108 (1.0) | 326 (3.0) | 279 (2.5) |
+| removexattr | 244 | 116 (0.5) | 847 (3.5) | 1856 (7.6) |
+| listxattr_1 | 111 | 106 (1.0) | 336 (3.0) | 286 (2.6) |
+| listxattr_10 | 112 | 111 (1.0) | 376 (3.4) | 303 (2.7) |
+| link | 715 | 574 (0.8) | 2610 (3.7) | 1949 (2.7) |
+| symlink | 952 | 702 (0.7) | 2583 (2.7) | 1960 (2.1) |
+| newchunk | 235 | 113 (0.5) | 1 (0.0) | 1 (0.0) |
+| write | 816 | 564 (0.7) | 2788 (3.4) | 2138 (2.6) |
+| read_1 | 0 | 0 (0.0) | 0 (0.0) | 0 (0.0) |
+| read_10 | 0 | 0 (0.0) | 0 (0.0) | 0 (0.0) |
+
+### JuiceFS Bench
+
+| | Redis-Always | Redis-Everysec | MySQL | TiKV |
+| -------------- | -------------- | -------------- | -------------- | -------------- |
+| Write big | 312.81 MiB/s | 303.45 MiB/s | 310.26 MiB/s | 310.90 MiB/s |
+| Read big | 348.06 MiB/s | 525.78 MiB/s | 493.45 MiB/s | 477.78 MiB/s |
+| Write small | 26.0 files/s | 27.5 files/s | 22.7 files/s | 24.2 files/s |
+| Read small | 1431.6 files/s | 1113.4 files/s | 608.0 files/s | 415.7 files/s |
+| Stat file | 6713.7 files/s | 6885.8 files/s | 2144.9 files/s | 1164.5 files/s |
+| FUSE operation | 0.45 ms | 0.32 ms | 0.41 ms | 0.40 ms |
+| Update meta | 1.04 ms | 0.79 ms | 3.36 ms | 1.74 ms |
+
+### mdtest
+
+- Shows rate (ops/sec), bigger is better
+
+| | Redis-Always | Redis-Everysec | MySQL | TiKV |
+| ------------------ | ------------ | -------------- | --------- | --------- |
+| **EMPTY FILES** | | | | |
+| Directory creation | 4149.645 | 9261.190 | 1603.298 | 2023.177 |
+| Directory stat | 172665.701 | 243307.527 | 15678.643 | 15029.717 |
+| Directory removal | 4687.027 | 9575.706 | 1420.124 | 1772.861 |
+| File creation | 4257.367 | 8994.232 | 1632.225 | 2119.616 |
+| File stat | 158793.214 | 287425.368 | 15598.031 | 14466.477 |
+| File read | 38872.116 | 47938.792 | 14004.083 | 17149.941 |
+| File removal | 3831.421 | 10538.675 | 983.338 | 1497.474 |
+| Tree creation | 100.403 | 108.657 | 44.154 | 15.615 |
+| Tree removal | 127.257 | 143.625 | 51.804 | 21.005 |
+| **SMALL FILES** | | | | |
+| File creation | 317.999 | 317.925 | 272.272 | 280.493 |
+| File stat | 54063.617 | 57798.963 | 13882.940 | 10984.141 |
+| File read | 56891.010 | 57548.889 | 16038.716 | 7155.426 |
+| File removal | 3638.809 | 8490.490 | 837.510 | 1184.253 |
+| Tree creation | 54.523 | 119.317 | 23.336 | 5.233 |
+| Tree removal | 73.366 | 82.195 | 22.783 | 4.918 |
+
+### fio
+
+| | Redis-Always | Redis-Everysec | MySQL | TiKV |
+| --------------- | ------------ | -------------- | --------- | --------- |
+| Write bandwidth | 350 MiB/s | 360 MiB/s | 360 MiB/s | 358 MiB/s |
+
diff --git a/docs/en/benchmark/operations_profiling.md b/docs/en/benchmark/operations_profiling.md
new file mode 100644
index 0000000..964cddb
--- /dev/null
+++ b/docs/en/benchmark/operations_profiling.md
@@ -0,0 +1,56 @@
+---
+sidebar_label: Operations Profiling
+sidebar_position: 3
+slug: /operations_profiling
+---
+# Operations Profiling
+
+## Introduction
+
+JuiceFS has a special virtual file named [`.accesslog`](../administration/fault_diagnosis_and_analysis.md#access-log) to track every operation that occurs within its client. This file may generate thousands of log entries per second under pressure, making it hard to find out what is actually going on at a certain time. Thus, we made a simple tool called [`juicefs profile`](../reference/command_reference.md#juicefs-profile) to show an overview of recently completed operations. The basic idea is to aggregate all logs in the past interval and display statistics periodically, like:
+
+![juicefs-profiling](../images/juicefs-profiling.gif)
+
+## Profiling Modes
+
+For now there are 2 modes of profiling: real time and replay.
+
+### Real Time Mode
+
+By executing the following command you can watch real time operations under the mount point:
+
+```bash
+$ juicefs profile MOUNTPOINT
+```
+
+> **Tip**: The result is sorted in descending order by total time.
+
+### Replay Mode
+
+Running the `profile` command on an existing log file enables the **replay mode**:
+
+```bash
+$ juicefs profile LOGFILE
+```
+
+When debugging or analyzing performance issues, it is usually more practical to record the access log first and then replay it (possibly multiple times). For example:
+
+```bash
+$ cat /jfs/.accesslog > /tmp/jfs-oplog
+# later
+$ juicefs profile /tmp/jfs-oplog
+```
+
+> **Tip 1**: The replay can be paused at any time by pressing Enter/Return, and resumed by pressing it again.
+>
+> **Tip 2**: Setting `--interval 0` will replay the whole log file as fast as possible, and show the result as if it all happened within a single interval.
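+
+For example, to replay the log file recorded above in one shot and get the overall statistics:
+
+```bash
+$ juicefs profile /tmp/jfs-oplog --interval 0
+```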
+
+## Filter
+
+Sometimes we are only interested in a certain user or process; in that case we can filter out the others by specifying its IDs, e.g.:
+
+```bash
+$ juicefs profile /tmp/jfs-oplog --uid 12345
+```
+
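+Filtering by process works the same way; for instance, to keep only the operations issued by one process (the PID below is a placeholder, and the available filter options are listed in `juicefs profile -h`):
+
+```bash
+$ juicefs profile /tmp/jfs-oplog --pid 67890
+```
+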
+For more information, please run `juicefs profile -h`.
diff --git a/docs/en/benchmark/performance_evaluation_guide.md b/docs/en/benchmark/performance_evaluation_guide.md
new file mode 100644
index 0000000..7dfeca4
--- /dev/null
+++ b/docs/en/benchmark/performance_evaluation_guide.md
@@ -0,0 +1,284 @@
+---
+sidebar_label: Performance Evaluation Guide
+sidebar_position: 2
+slug: /performance_evaluation_guide
+---
+# JuiceFS Performance Evaluation Guide
+
+Before starting performance testing, it is a good idea to write down a general description of that usage scenario, including:
+
+1. What is the application used for? For example, Apache Spark, PyTorch, or a program you developed yourself.
+2. The resources allocated to run the application, including CPU, memory, network, and node size.
+3. The expected size of the data, including the number and volume of files.
+4. The file sizes and access patterns (large or small files, sequential or random reads and writes).
+5. The performance requirements, such as the amount of data written or read per second, the QPS of accesses, or the latency of operations.
+
+The clearer and more detailed these elements are, the easier it will be to prepare a suitable test plan and decide which performance indicators to focus on, in order to determine the application's requirements for various aspects of the storage system, including JuiceFS metadata configuration, network bandwidth requirements, configuration parameters, etc. Of course, it is not easy to write all of the above down clearly at the beginning, and some of it can be clarified gradually during testing, **but at the end of a complete test, the above usage scenario description and the corresponding test methods, test data, and test results should be complete**.
+
+Even if the above is not yet clear, it does not matter: the JuiceFS built-in test tool can get the core single-machine benchmark metrics with a single command. This article also introduces two built-in performance analysis tools that can help you analyze the reasons behind JuiceFS performance in a simple and clear way when doing more complex tests.
+
+## Performance Testing Quick Start
+
+The following example describes the basic usage of the bench tool built-in to JuiceFS.
+
+### Working Environment
+
+- Host: Amazon EC2 c5.xlarge one
+- OS: Ubuntu 20.04.1 LTS (Kernel 5.4.0-1029-aws)
+- Metadata Engine: Redis 6.2.3, storage (dir) configured on system disk
+- Object Storage: Amazon S3
+- JuiceFS Version: 0.17-dev (2021-09-23 2ec2badf)
+
+### JuiceFS Bench
+
+The JuiceFS `bench` command can help you quickly complete a single-machine performance test and determine from the results whether the environment configuration and performance are normal. Assuming you have mounted JuiceFS at `/mnt/jfs` on your server (if you need help with JuiceFS initialization and mounting, please refer to the [Quick Start Guide](../getting-started/for_local.md)), execute the following command (it is recommended to set the `-p` parameter to the number of CPU cores on the server):
+
+```bash
+juicefs bench /mnt/jfs -p 4
+```
+
+The test results will show each performance indicator as green, yellow or red. If you have red indicators in your results, please check the relevant configuration first, and if you need help, you can describe your problem in detail at [GitHub Discussions](https://github.com/juicedata/juicefs/discussions).
+
+![bench](../images/bench-guide-bench.png)
+
+The JuiceFS `bench` benchmark test flow is as follows (its logic is very simple, and those interested in the details can look directly at the [source code](https://github.com/juicedata/juicefs/blob/main/cmd/bench.go)):
+
+1. Each of N concurrent threads writes 1 large file of 1 GiB, with an IO size of 1 MiB
+2. Each of N concurrent threads reads the 1 GiB large file it wrote previously, with an IO size of 1 MiB
+3. Each of N concurrent threads writes 100 small files of 128 KiB, with an IO size of 128 KiB
+4. Each of N concurrent threads reads the 100 small 128 KiB files it wrote previously, with an IO size of 128 KiB
+5. Each of N concurrent threads stats the 100 small 128 KiB files it wrote previously
+6. Clean up the temporary test directory
+
+The value of the concurrency number N is specified by the `-p` parameter in the `bench` command.
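+
+For example, on a Linux host you could let the shell fill in the core count (a small convenience sketch, assuming the standard `nproc` utility is available):
+
+```bash
+juicefs bench /mnt/jfs -p $(nproc)
+```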
+
+Here's a performance comparison using a few common storage types provided by AWS.
+
+- EFS 1TiB capacity at 150MiB/s read and 50MiB/s write at $0.08/GB-month
+- EBS st1 is a throughput-optimized HDD with a maximum throughput of 500MiB/s, a maximum IOPS (1MiB I/O) of 500, and a maximum capacity of 16TiB, priced at $0.045/GB-month
+- EBS gp2 is a general-purpose SSD with a maximum throughput of 250MiB/s, maximum IOPS (16KiB I/O) of 16,000, and maximum capacity of 16TiB, priced at $0.10/GB-month
+
+It is easy to see that in the above test, JuiceFS has significantly better sequential read and write capabilities than AWS EFS and more throughput than the commonly used EBS, but writing small files is not as fast because every file written needs to be persisted to S3 and there is typically a fixed overhead of 10-30ms for calling the object storage API.
+
+:::note
+The performance of Amazon EFS is linearly related to capacity ([refer to the official documentation](https://docs.aws.amazon.com/efs/latest/ug/performance.html#performancemodes)), which makes it unsuitable for use in high throughput scenarios with small data sizes.
+:::
+
+:::note
+Prices refer to [AWS US East, Ohio Region](https://aws.amazon.com/ebs/pricing/?nc1=h_ls); prices vary slightly by Region.
+:::
+
+:::note
+The above data is from the [AWS official documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html), and the performance metrics are maximum values. The actual performance of EBS depends on the volume capacity and the EC2 instance type it is attached to: in general, the larger the volume and the higher the EC2 instance specification, the better the EBS performance, but never exceeding the maximum values mentioned above.
+:::
+
+## Performance Observation and Analysis Tools
+
+The following two performance observation and analysis tools are essential for testing, using, and tuning JuiceFS.
+
+### JuiceFS Stats
+
+JuiceFS `stats` is a tool that reports JuiceFS performance metrics in real time, similar to the Linux `dstat` command, displaying changes in the JuiceFS client's metrics every second (see the [documentation](stats_watcher.md) for details). While `juicefs bench` is running, open a new session and execute the following command:
+
+```bash
+juicefs stats /mnt/jfs --verbosity 1
+```
+
+The results are as follows, which can be more easily understood when viewed against the benchmarking process described above.
+
+![stats](../images/bench-guide-stats.png)
+
+The specific meaning of each of these indicators is as follows:
+
+- usage
+ - cpu: CPU consumed by the JuiceFS process
+ - mem: the physical memory consumed by the JuiceFS process
+ - buf: internal read/write buffer size of JuiceFS process, limited by mount option `--buffer-size`
+ - cache: internal metric, can be simply ignored
+- fuse
+ - ops/lat: number of requests per second processed by the FUSE interface and their average latency (in milliseconds)
+ - read/write: bandwidth value of the FUSE interface to handle read and write requests per second
+- meta
+ - ops/lat: number of requests per second processed by the metadata engine and their average latency (in milliseconds). Please note that some requests that can be processed directly in the cache are not included in the statistics to better reflect the time spent by the client interacting with the metadata engine.
+ - txn/lat: the number of **write transactions** processed by the metadata engine per second and their average latency (in milliseconds). Read-only requests such as `getattr` are only counted as ops and not txn.
+ - retry: the number of **write transactions** that the metadata engine retries per second
+- blockcache
+ - read/write: read/write traffic per second for the client's local data cache
+- object
+ - get/get_c/lat: bandwidth value of object store per second for processing **read requests**, number of requests and their average latency (in milliseconds)
+ - put/put_c/lat: bandwidth value of object store for **write requests** per second, number of requests and their average latency (in milliseconds)
+ - del_c/lat: the number of **delete requests** per second and the average latency (in milliseconds) of the object store
+
+### JuiceFS Profile
+
+JuiceFS `profile` outputs all access logs of the JuiceFS client in real time, including information about each request. It can also be used to replay and aggregate the JuiceFS access logs, allowing users to visualize the operations of JuiceFS (for a detailed description and usage see the [documentation](operations_profiling.md)). While `juicefs bench` is running, open a new session and execute the following command:
+
+```bash
+cat /mnt/jfs/.accesslog > access.log
+```
+
+Here `.accesslog` is a virtual file that normally does not produce any data; it only outputs the JuiceFS access log while it is being read (e.g. by executing `cat`). When the benchmark is finished, press Ctrl-C to end the `cat` command, and then run:
+
+```bash
+juicefs profile access.log --interval 0
+```
+
+The `--interval` parameter sets the sampling interval of the access log; when set to 0, it replays the specified log file as fast as possible and generates the overall statistics, as shown in the following figure.
+
+![profile](../images/bench-guide-profile.png)
+
+From the description of the previous benchmarking process, a total of (1 + 100) * 4 = 404 files were created during this test, and each file went through the process of "Create → Write → Close → Open → Read → Close → Delete", so there are a total of:
+
+- 404 create, open and unlink requests
+- 808 flush requests: flush is automatically invoked whenever a file is closed
+- 33168 write/read requests: each large file is written with 1024 IOs of 1 MiB, while the default maximum FUSE request size is 128 KiB, which means each application IO is split into 8 FUSE requests, so there are (1024 * 8 + 100) * 4 = 33168 requests. Reads behave similarly, so the count is the same.
+
+All these values correspond exactly to the results of `profile`. Also note that, by default, JuiceFS `write` goes to the memory buffer first, and the data is uploaded to the object store by `flush` when the file is closed, as expected.
+
+## Other Test Tool Configuration Examples
+
+### Fio Standalone Performance Test
+
+Fio is a widely used performance testing tool that can be used for more complex performance tests after the JuiceFS `bench` test.
+
+#### Working Environment
+
+Consistent with the JuiceFS Bench test environment described above.
+
+#### Testing tasks
+
+Perform the following 4 Fio tasks for sequential write, sequential read, random write, and random read tests.
+
+Sequential write
+
+```shell
+fio --name=jfs-test --directory=/mnt/jfs --ioengine=libaio --rw=write --bs=1m --size=1g --numjobs=4 --direct=1 --group_reporting
+```
+
+Sequential read
+
+```bash
+fio --name=jfs-test --directory=/mnt/jfs --ioengine=libaio --rw=read --bs=1m --size=1g --numjobs=4 --direct=1 --group_reporting
+```
+
+Random write
+
+```shell
+fio --name=jfs-test --directory=/mnt/jfs --ioengine=libaio --rw=randwrite --bs=1m --size=1g --numjobs=4 --direct=1 --group_reporting
+```
+
+Random read
+
+```shell
+fio --name=jfs-test --directory=/mnt/jfs --ioengine=libaio --rw=randread --bs=1m --size=1g --numjobs=4 --direct=1 --group_reporting
+```
+
+Parameter descriptions:
+
+- `--name`: user-specified test name, which affects the test file name
+- `--directory`: test directory
+- `--ioengine`: the way to send IO when testing; usually `libaio` is used
+- `--rw`: commonly used are read, write, randread, randwrite, which stand for sequential read/write and random read/write respectively
+- `--bs`: the size of each IO
+- `--size`: the total size of IO per thread; usually equal to the size of the test file
+- `--numjobs`: number of concurrent test threads; by default each thread runs a separate test file
+- `--direct`: add the `O_DIRECT` flag bit when opening the file, without using system buffering, which can make the test results more stable and accurate
+
+The results are as follows:
+
+```bash
+# Sequential
+WRITE: bw=703MiB/s (737MB/s), 703MiB/s-703MiB/s (737MB/s-737MB/s), io=4096MiB (4295MB), run=5825-5825msec
+READ: bw=817MiB/s (856MB/s), 817MiB/s-817MiB/s (856MB/s-856MB/s), io=4096MiB (4295MB), run=5015-5015msec
+
+# Random
+WRITE: bw=285MiB/s (298MB/s), 285MiB/s-285MiB/s (298MB/s-298MB/s), io=4096MiB (4295MB), run=14395-14395msec
+READ: bw=93.6MiB/s (98.1MB/s), 93.6MiB/s-93.6MiB/s (98.1MB/s-98.1MB/s), io=4096MiB (4295MB), run=43773-43773msec
+```
+
+### Vdbench Multi-computer Performance Test
+
+Vdbench is a commonly used file system evaluation tool, and it supports concurrent multi-machine testing very well.
+
+#### Working Environment
+
+Similar to the JuiceFS Bench test environment, except that two more hosts with the same specifications were turned on, for a total of three.
+
+#### Preparation
+
+Vdbench needs to be installed at the same path on each node:
+
+1. Download version 50406 from the [Official Website](https://www.oracle.com/downloads/server-storage/vdbench-downloads.html)
+2. Install Java: `apt-get install openjdk-8-jre`
+3. Verify that vdbench is installed successfully: `./vdbench -t`
+
+Then, assuming the names of the three nodes are node0, node1 and node2, you need to create a configuration file on node0 as follows (to test reading and writing a large number of small files):
+
+```bash
+$ cat jfs-test
+hd=default,vdbench=/root/vdbench50406,user=root
+hd=h0,system=node0
+hd=h1,system=node1
+hd=h2,system=node2
+
+fsd=fsd1,anchor=/mnt/jfs/vdbench,depth=1,width=100,files=3000,size=128k,shared=yes
+
+fwd=default,fsd=fsd1,operation=read,xfersize=128k,fileio=random,fileselect=random,threads=4
+fwd=fwd1,host=h0
+fwd=fwd2,host=h1
+fwd=fwd3,host=h2
+
+rd=rd1,fwd=fwd*,fwdrate=max,format=yes,elapsed=300,interval=1
+```
+
+Parameter descriptions:
+
+- `vdbench=/root/vdbench50406`: specifies the path where the vdbench tool is installed
+- `anchor=/mnt/jfs/vdbench`: specifies the path to run test tasks on each node
+- `depth=1,width=100,files=3000,size=128k`: defines the file tree structure of the test task, i.e. 100 directories are created under the test directory, each containing 3000 files of 128 KiB, 300,000 files in total
+- `operation=read,xfersize=128k,fileio=random,fileselect=random`: defines the actual test task, i.e., randomly selecting files to send 128 KiB size read requests
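+
+With the configuration file in place, the test is launched from node0 using the standard vdbench invocation (a sketch, assuming the `jfs-test` file shown above is saved in the vdbench installation directory; vdbench then drives node1 and node2 according to the `hd=` definitions):
+
+```bash
+$ cd /root/vdbench50406
+$ ./vdbench -f jfs-test
+```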
+
+The results are as follows:
+
+```
+FILE_CREATES Files created: 300,000 498/sec
+READ_OPENS Files opened for read activity: 188,317 627/sec
+```
+
+Overall, the system creates 128 KiB files at 498 files per second and reads them at 627 files per second.
+
+#### Other Reference Examples
+
+For reference, here are some profiles for simple local evaluation of file system performance; the test set size and concurrency can be adjusted to suit the actual situation.
+
+##### Sequential reading and writing of large files
+
+The file size is 1 GiB; `fwd1` sequentially writes large files and `fwd2` sequentially reads them.
+
+```bash
+$ cat local-big
+fsd=fsd1,anchor=/mnt/jfs/local-big,depth=1,width=1,files=4,size=1g,openflags=o_direct
+
+fwd=fwd1,fsd=fsd1,operation=write,xfersize=1m,fileio=sequential,fileselect=sequential,threads=4
+fwd=fwd2,fsd=fsd1,operation=read,xfersize=1m,fileio=sequential,fileselect=sequential,threads=4
+
+rd=rd1,fwd=fwd1,fwdrate=max,format=restart,elapsed=120,interval=1
+rd=rd2,fwd=fwd2,fwdrate=max,format=restart,elapsed=120,interval=1
+```
+
+##### Random reading and writing of small files
+
+The file size is 128 KiB; `fwd1` randomly writes small files, `fwd2` randomly reads small files, and `fwd3` performs mixed reads and writes (read/write ratio = 7:3).
+
+```bash
+$ cat local-small
+fsd=fsd1,anchor=/mnt/jfs/local-small,depth=1,width=20,files=2000,size=128k,openflags=o_direct
+
+fwd=fwd1,fsd=fsd1,operation=write,xfersize=128k,fileio=random,fileselect=random,threads=4
+fwd=fwd2,fsd=fsd1,operation=read,xfersize=128k,fileio=random,fileselect=random,threads=4
+fwd=fwd3,fsd=fsd1,rdpct=70,xfersize=128k,fileio=random,fileselect=random,threads=4
+
+rd=rd1,fwd=fwd1,fwdrate=max,format=restart,elapsed=120,interval=1
+rd=rd2,fwd=fwd2,fwdrate=max,format=restart,elapsed=120,interval=1
+rd=rd3,fwd=fwd3,fwdrate=max,format=restart,elapsed=120,interval=1
+```
diff --git a/docs/en/benchmark/stats_watcher.md b/docs/en/benchmark/stats_watcher.md
new file mode 100644
index 0000000..f2814a0
--- /dev/null
+++ b/docs/en/benchmark/stats_watcher.md
@@ -0,0 +1,37 @@
+---
+sidebar_label: Performance Statistics Watcher
+sidebar_position: 4
+slug: /stats_watcher
+---
+# JuiceFS Performance Statistics Watcher
+
+JuiceFS pre-defines many monitoring items to show internal performance statistics while the system is running. These items are [exposed](../administration/monitoring.md) through a Prometheus API. However, when diagnosing performance issues, users may want a real-time monitoring tool to know what is actually going on within a certain time. Thus, the `stats` command was developed to display selected items every second, similar to the Linux tool `dstat`. The output looks like:
+
+![stats_watcher](../images/juicefs_stats_watcher.png)
+
+By default, this command will monitor the JuiceFS process corresponding to the specified mount point, showing the following items:
+
+#### usage
+
+- cpu: CPU usage of the process
+- mem: physical memory used by the process
+- buf: current buffer size of JuiceFS, limited by mount option `--buffer-size`
+
+#### fuse
+
+- ops/lat: number of operations handled by FUSE per second, and their average latency (in milliseconds)
+- read/write: read/write bandwidth handled by FUSE
+
+#### meta
+
+- ops/lat: number of metadata operations per second and their average latency (in milliseconds). Please note that operations served directly from cache are not counted, so that the result is closer to the real performance of the metadata engine
+
+#### blockcache
+
+- read/write: read/write bandwidth of the client's local data cache
+
+#### object
+
+- get/put: Get/Put bandwidth between client and object storage
+
+Moreover, users can acquire verbose statistics (like read/write ops and the average latency) by setting `--verbosity 1`, or customize displayed items by changing `--schema`. For more information, please check `juicefs stats -h`.
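+
+For example, to watch a mount point with the verbose schema (the same invocation used in the performance evaluation guide; the mount point below is a placeholder):
+
+```bash
+juicefs stats /mnt/jfs --verbosity 1
+```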
diff --git a/docs/en/client_compile_and_upgrade.md b/docs/en/client_compile_and_upgrade.md
new file mode 100644
index 0000000..501e284
--- /dev/null
+++ b/docs/en/client_compile_and_upgrade.md
@@ -0,0 +1,40 @@
+# JuiceFS client compilation and upgrade
+
+For general users, it is recommended to directly visit the [releases](https://github.com/juicedata/juicefs/releases) page to download the pre-compiled version for installation and use.
+
+## Compile from source code
+
+If you want to experience the new features of JuiceFS first, you can clone the code from the `main` branch of our GitHub repository and manually compile the latest client.
+
+### Clone repository
+
+```shell
+$ git clone https://github.com/juicedata/juicefs.git
+```
+
+### Compile
+
+The JuiceFS client is developed in Go, so before compiling you must install the following tools in advance:
+
+- [Go](https://golang.org) 1.16+
+- GCC 5.4+
+
+> **Tip**: For users in China, in order to download the Go modules faster, it is recommended to set the mirror server through the `GOPROXY` environment variable. For example: [Goproxy China](https://github.com/goproxy/goproxy.cn).
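+>
+> One possible way to set this up is:
+>
+> ```shell
+> go env -w GOPROXY=https://goproxy.cn,direct
+> ```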
+
+Enter the source code directory:
+
+```shell
+$ cd juicefs
+```
+
+Compiling:
+
+```shell
+$ make
+```
+
+After the compilation is successful, you can find the compiled `juicefs` binary program in the current directory.
+
+## JuiceFS client upgrade
+
+The JuiceFS client is a single binary file named `juicefs`. To upgrade, you only need to replace the old binary with the new one.
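+
+A minimal upgrade sketch, assuming the client is installed at `/usr/local/bin`, the file system is mounted at `/mnt/jfs`, and a Redis metadata engine is used (adjust paths and the metadata URL to your environment):
+
+```shell
+# unmount the file system that is still using the old client
+$ sudo umount /mnt/jfs
+# overwrite the old binary with the newly downloaded or compiled one
+$ sudo cp juicefs /usr/local/bin/juicefs
+# remount with the new client, using the same metadata engine URL as before
+$ sudo juicefs mount -d redis://{HOST}:{PORT}/{DB} /mnt/jfs
+```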
diff --git a/docs/en/community/_adopters.md b/docs/en/community/_adopters.md
new file mode 100644
index 0000000..2a8971f
--- /dev/null
+++ b/docs/en/community/_adopters.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Adopters
+sidebar_position: 1
+---
+# Adopters
\ No newline at end of file
diff --git a/docs/en/community/_integrations.md b/docs/en/community/_integrations.md
new file mode 100644
index 0000000..259e427
--- /dev/null
+++ b/docs/en/community/_integrations.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Integrations
+sidebar_position: 2
+---
+# Integrations
\ No newline at end of file
diff --git a/docs/en/community/_roadmap.md b/docs/en/community/_roadmap.md
new file mode 100644
index 0000000..8cd3ef1
--- /dev/null
+++ b/docs/en/community/_roadmap.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Roadmap
+sidebar_position: 3
+---
+# Roadmap
\ No newline at end of file
diff --git a/docs/en/community/usage_tracking.md b/docs/en/community/usage_tracking.md
new file mode 100644
index 0000000..3fafeef
--- /dev/null
+++ b/docs/en/community/usage_tracking.md
@@ -0,0 +1,14 @@
+---
+sidebar_label: Usage Tracking
+sidebar_position: 4
+---
+
+# Usage Tracking
+
+JuiceFS by default collects and reports **anonymous** usage data. It only collects core metrics (e.g. version number, file system size); no user data or other sensitive data is collected. You can review the related code [here](https://github.com/juicedata/juicefs/blob/main/pkg/usage/usage.go).
+
+This data helps us understand how the community is using this project. You can easily disable reporting with the command line option `--no-usage-report`:
+
+```
+$ juicefs mount --no-usage-report
+```
diff --git a/docs/en/comparison/juicefs_vs_alluxio.md b/docs/en/comparison/juicefs_vs_alluxio.md
new file mode 100644
index 0000000..8739391
--- /dev/null
+++ b/docs/en/comparison/juicefs_vs_alluxio.md
@@ -0,0 +1,72 @@
+# JuiceFS vs. Alluxio
+
+[Alluxio](https://www.alluxio.io) (/əˈlʌksio/) is a data access layer in the big data and machine learning ecosystem. It began as the research project "Tachyon", created at the University of California, Berkeley's [AMPLab](https://en.wikipedia.org/wiki/AMPLab) as the creator's Ph.D. thesis in 2013, and was open sourced in 2014.
+
+The following table shows the differences in main features between Alluxio and JuiceFS.
+
+| Features | Alluxio | JuiceFS |
+| -------- | ------- | ------- |
+| Storage format | Object | Block |
+| Cache granularity | 64MiB | 4MiB |
+| Multi-tier cache | ✓ | ✓ |
+| Hadoop-compatible | ✓ | ✓ |
+| S3-compatible | ✓ | ✓ |
+| Kubernetes CSI Driver | ✓ | ✓ |
+| Hadoop data locality | ✓ | ✓ |
+| Fully POSIX-compatible | ✕ | ✓ |
+| Atomic metadata operation | ✕ | ✓ |
+| Consistency | ✕ | ✓ |
+| Data compression | ✕ | ✓ |
+| Data encryption | ✕ | ✓ |
+| Zero-effort operation | ✕ | ✓ |
+| Language | Java | Go |
+| Open source license | Apache License 2.0 | Apache License 2.0 |
+| Open source date | 2011 | 2021.1 |
+
+### Storage format
+
+The [storage format](../reference/how_juicefs_store_files.md) of a file in JuiceFS consists of three levels: chunk, slice and block. A file is split into multiple blocks, which are compressed and encrypted (optionally) before being stored in object storage.
+
+Alluxio stores files as objects in UFS. A file is not split into blocks as it is in JuiceFS.
+
+### Cache granularity
+
+The [default block size](../reference/how_juicefs_store_files.md) of JuiceFS is 4MiB, a smaller granularity than Alluxio's 64MiB. The smaller block size is better for random-read workloads (e.g. Parquet and ORC), i.e. cache management is more efficient.
+
+### Hadoop-compatible
+
+JuiceFS is [HDFS-compatible](../deployment/hadoop_java_sdk.md), not only with Hadoop 2.x and Hadoop 3.x, but also with a variety of components in the Hadoop ecosystem.
+
+### Kubernetes CSI Driver
+
+JuiceFS provides a [Kubernetes CSI Driver](https://github.com/juicedata/juicefs-csi-driver) for people who want to use JuiceFS in Kubernetes. Alluxio also provides a [Kubernetes CSI Driver](https://github.com/Alluxio/alluxio-csi), but that project does not appear to be actively maintained or officially supported by Alluxio.
+
+### Fully POSIX-compatible
+
+JuiceFS is [fully POSIX-compatible](../reference/posix_compatibility.md). A pjdfstest run from [JD.com](https://www.slideshare.net/Alluxio/using-alluxio-posix-fuse-api-in-jdcom) shows that Alluxio did not pass the POSIX compatibility test; e.g. Alluxio doesn't support symbolic links, truncate, fallocate, append, xattr, mkfifo, mknod and utimes. Besides the things covered by pjdfstest, JuiceFS also provides close-to-open consistency, atomic metadata operations, mmap, fallocate with punch hole, xattr, BSD locks (flock) and POSIX record locks (fcntl).
+
+### Atomic metadata operation
+
+A metadata operation in Alluxio has two steps: the first step is to modify the state of the Alluxio master, and the second step is to send a request to the UFS. As you can see, the metadata operation is not atomic; its state is unpredictable while the operation is executing or when a failure occurs. Alluxio relies on the UFS to implement metadata operations; for example, a file rename becomes a copy plus a delete.
+
+Thanks to [Redis transactions](https://redis.io/topics/transactions), **most metadata operations in JuiceFS are atomic**, e.g. renaming a file, deleting a file, renaming a directory. You don't have to worry about consistency or performance.
+
+### Consistency
+
+Alluxio loads metadata from the UFS as needed, and it has no information about the UFS at startup. By default, Alluxio expects all modifications to the UFS to occur through Alluxio. If changes are made to the UFS directly, you need to sync metadata between Alluxio and the UFS either manually or periodically. As the ["Atomic metadata operation"](#atomic-metadata-operation) section says, the two-step metadata operation may result in inconsistency.
+
+JuiceFS provides strong consistency for both metadata and data. **The metadata service of JuiceFS is the single source of truth, not a mirror of the UFS.** The metadata service does not rely on object storage to obtain metadata; the object storage is simply treated as unlimited block storage. There is no inconsistency between JuiceFS and the object storage.
+
+### Data compression
+
+JuiceFS supports using [LZ4](https://lz4.github.io/lz4) or [Zstandard](https://facebook.github.io/zstd) to compress all your data. Alluxio doesn't have this feature.
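+
+Compression is selected when a file system is formatted, e.g. (a sketch; the metadata URL and volume name are placeholders, see `juicefs format -h` for details):
+
+```bash
+$ juicefs format --compress lz4 redis://localhost:6379/1 myjfs
+```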
+
+### Data encryption
+
+JuiceFS supports data encryption in transit and at rest. Alluxio community edition doesn't have this feature, but [enterprise edition](https://docs.alluxio.io/ee/user/stable/en/operation/Security.html#end-to-end-data-encryption) has.
+
+### Zero-effort operation
+
+Alluxio's architecture can be divided into 3 components: master, worker and client. A typical cluster consists of a single leading master, standby masters, a job master, standby job masters, workers, and job workers. You need to operate and maintain these masters and workers yourself.
+
+JuiceFS uses Redis or [other databases](../reference/how_to_setup_metadata_engine.md) as the metadata engine. You can easily use a managed service from a public cloud provider as JuiceFS's metadata engine, with no operations work required.
diff --git a/docs/en/comparison/juicefs_vs_cephfs.md b/docs/en/comparison/juicefs_vs_cephfs.md
new file mode 100644
index 0000000..b01b34d
--- /dev/null
+++ b/docs/en/comparison/juicefs_vs_cephfs.md
@@ -0,0 +1,66 @@
+# JuiceFS vs. CephFS
+
+## Similarities
+
+Both are highly reliable, high-performance, elastic distributed file systems with good POSIX compatibility, and can be used in a variety of file system scenarios.
+
+## Differences
+
+### System Architecture
+
+Both use an architecture that separates data and metadata, but there are significant differences in component implementation.
+
+#### CephFS
+
+CephFS is a complete and independent system, mainly intended for private cloud deployments; all data and metadata are persisted in Ceph's own storage pools (RADOS pools).
+
+- Metadata
+ - Service Process (MDS): stateless and theoretically horizontally scalable. There are mature master-slave mechanisms, but multi-master deployments still have performance and stability concerns; production environments typically use one-master-multi-slaves or multi-master static isolation.
+ - Persistence: independent RADOS storage pools, usually on SSDs or higher-performance hardware
+- Data: One or more RADOS storage pools, with different configurations specified by Layout, such as chunk size (default 4 MiB), redundancy (multi-copy, EC), etc.
+- Client: kernel client (kcephfs), user-space client (ceph-fuse) and SDKs for C++, Python, etc. based on libcephfs; recently the community has also provided a Windows client (ceph-dokan). The ecosystem also includes a VFS object for Samba and an FSAL module for NFS-Ganesha.
+
+#### JuiceFS
+
+JuiceFS mainly provides the libjfs library, a FUSE client application, a Java SDK, etc. It supports various metadata engines and object storages, and is suitable for deployment in public, private or hybrid cloud environments.
+
+- Metadata: See [database implementation](../reference/how_to_setup_metadata_engine.md) for details, including:
+ - Redis and various services compatible with the Redis protocol (transactions must be supported)
+ - SQL family: MySQL, PostgreSQL, SQLite, etc.
+ - Distributed K/V storage: TiKV is supported and Apple FoundationDB is planned to be supported.
+ - Self-developed engine: JuiceFS fully managed service for use on the public cloud.
+- Data: supports over 30 [object stores](../reference/how_to_setup_object_storage.md) on public clouds, and can also be used with MinIO, Ceph RADOS, Ceph RGW, etc.
+- Clients: Unix user-space mount, Windows mount, a Java SDK fully compatible with HDFS semantics, a [Python SDK](https://github.com/megvii-research/juicefs-python) and a built-in S3 gateway.
+
+### Features
+
+| | CephFS | JuiceFS |
+| ----------------------- | ---------- | ------------- |
+| File chunking [1] | ✓ | ✓ |
+| Metadata transactions | ✓ | ✓ |
+| Strong consistency | ✓ | ✓ |
+| Kubernetes CSI Driver | ✓ | ✓ |
+| Hadoop-compatible | ✓ | ✓ |
+| Data compression [2] | ✓ | ✓ |
+| Data encryption [3] | ✓ | ✓ |
+| Snapshot | ✓ | ✕ |
+| Client data caching | ✕ | ✓ |
+| Hadoop data Locality | ✕ | ✓ |
+| S3-compatible | ✕ | ✓ |
+| Quota | Directory level quota | Volume level quota |
+| Languages | C++ | Go |
+| License | LGPLv2.1 & LGPLv3 | Apache License 2.0 |
+
+#### [1] File Chunking
+
+CephFS splits files by [`object_size`](https://docs.ceph.com/en/latest/cephfs/file-layouts/#reading-layouts-with-getfattr) (default 4MiB), and each chunk corresponds to a RADOS object. JuiceFS, in contrast, splits files into 64MiB chunks, and each chunk is further split into one or more logical slices as needed when writing. Each slice is then split into one or more logical blocks when writing to the object store, and each block corresponds to one object in the object store. When handling overwrites, CephFS needs to modify the corresponding objects directly, which is a complicated process; especially when the redundancy policy is EC or data compression is enabled, it often needs to read part of the object content first, modify it in memory, and then write it back, which brings a large performance overhead. JuiceFS handles overwrites by writing the updated data as new objects and modifying the metadata, which greatly improves performance; any stale data produced during the process is garbage collected asynchronously.
+
+#### [2] Data Compression
+
+Strictly speaking, CephFS does not provide data compression itself; it relies on BlueStore compression at the RADOS layer. JuiceFS, on the other hand, compresses a block before it is uploaded to the object store, in order to reduce the capacity used in the object storage. In other words, if you use JuiceFS with RADOS, a block can be compressed once before entering RADOS and once again inside it. Also, as mentioned in **File Chunking**, to guarantee overwrite performance, CephFS does not normally enable BlueStore compression.
+
+#### [3] Data Encryption
+
+Ceph **Messenger v2** supports data encryption at the network transport layer; at the storage layer, similar to compression, it relies on the encryption configured when the OSD is created.
+
+JuiceFS encrypts objects before uploading and decrypts them after downloading, which is completely transparent to the object storage.
diff --git a/docs/en/comparison/juicefs_vs_s3ql.md b/docs/en/comparison/juicefs_vs_s3ql.md
new file mode 100644
index 0000000..ced2e2e
--- /dev/null
+++ b/docs/en/comparison/juicefs_vs_s3ql.md
@@ -0,0 +1,113 @@
+# JuiceFS vs. S3QL
+
+Similar to JuiceFS, [S3QL](https://github.com/s3ql/s3ql) is an open source network file system driven by object storage and a database. All data is split into blocks and stored in object storage services such as Amazon S3, Backblaze B2, or OpenStack Swift, while the corresponding metadata is stored in a database.
+
+## Similarities
+
+- Both support the standard POSIX file system interface through the FUSE module, so that massive cloud storage can be mounted locally and used like local storage.
+- Both provide standard file system features: hard links, symbolic links, extended attributes, file permissions.
+- Both support data compression and encryption, though the algorithms used differ.
+
+## Differences
+
+- S3QL only supports SQLite as the metadata engine, while JuiceFS supports more databases, such as Redis, TiKV, MySQL, PostgreSQL, and SQLite.
+- S3QL has no distributed capability and **does not** support multi-host shared mounting. JuiceFS is a typical distributed file system; when using a network-based database, it supports distributed mounting with reads and writes from multiple hosts.
+- S3QL commits a data block to S3 only after it has not been accessed for a few seconds. Even after a file is closed or fsynced, its data is only guaranteed to be in system memory, which may result in data loss if the node fails. JuiceFS ensures high data durability by uploading all blocks synchronously when a file is closed.
+- S3QL provides data deduplication: only one copy of identical data is stored, which reduces storage usage but also increases the performance overhead of the system. JuiceFS pays more attention to performance; deduplication on large-scale data is too expensive, so this feature is not provided for now.
+- S3QL provides remote backup of metadata: the SQLite database holding the metadata is backed up asynchronously to the object storage. JuiceFS mainly uses network databases such as Redis and MySQL and does not directly provide a SQLite synchronization backup, but it supports metadata import and export as well as synchronization across storage backends, so users can easily back up metadata to object storage or migrate between different databases (see the example after this list).
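+
+For example, JuiceFS metadata can be exported to and restored from a JSON file with the `dump` and `load` subcommands (a sketch; the Redis address is a placeholder, see `juicefs dump -h` for details):
+
+```shell
+$ juicefs dump redis://localhost:6379/1 meta-backup.json
+$ juicefs load redis://localhost:6379/1 meta-backup.json
+```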
+
+| | **S3QL** | **JuiceFS** |
+| :------------------------ | :-------------------- | :---------------------------- |
+| Metadata engine | SQLite | Redis, MySQL, SQLite, TiKV |
+| Storage engine | Object Storage, Local | Object Storage, WebDAV, Local |
+| Operating system | Unix-like | Linux, macOS, Windows |
+| Compression algorithm | LZMA, bzip2, gzip | lz4, zstd |
+| Encryption algorithm | AES-256 | AES-GCM, RSA |
+| POSIX compatible | ✓ | ✓ |
+| Hard link | ✓ | ✓ |
+| Symbolic link | ✓ | ✓ |
+| Extended attributes | ✓ | ✓ |
+| Standard Unix permissions | ✓ | ✓ |
+| Data block | ✓ | ✓ |
+| Local cache | ✓ | ✓ |
+| Elastic storage | ✓ | ✓ |
+| Metadata backup | ✓ | ✓ |
+| Data deduplication | ✓ | ✕ |
+| Immutable trees | ✓ | ✕ |
+| Snapshots | ✓ | ✕ |
+| Share mount | ✕ | ✓ |
+| Hadoop SDK | ✕ | ✓ |
+| Kubernetes CSI Driver | ✕ | ✓ |
+| S3 gateway | ✕ | ✓ |
+| Language | Python | Go |
+| Open source license | GPLv3 | Apache License 2.0 |
+| Open source date | 2011 | 2021.1 |
+
+## Usage
+
+This part mainly evaluates the ease of installation and use of the two products.
+
+### Installation
+
+For installation, we use the Rocky Linux 8.4 operating system (kernel 4.18.0-305.12.1.el8_4.x86_64).
+
+#### S3QL
+
+S3QL is developed in Python and requires python-devel 3.7 or above. At a minimum, the following dependencies must also be met: fuse3-devel, gcc, pyfuse3, sqlite-devel, cryptography, defusedxml, apsw, dugong. You also need to pay special attention to Python package dependencies and locations.
+
+S3QL will install 12 binary programs in the system, and each program provides an independent function, as shown in the figure below.
+
+![](../images/s3ql-bin.jpg)
+
+#### JuiceFS
+
+JuiceFS is developed in Go and can be used directly by downloading the pre-compiled binary file. The JuiceFS client has only one binary, `juicefs`; just copy it to any directory in the system's executable path, for example `/usr/local/bin`.
+
+### Create and Mount a file system
+
+Both S3QL and JuiceFS use a database to store metadata. S3QL only supports SQLite, while JuiceFS supports databases such as Redis, TiKV, MySQL, MariaDB, PostgreSQL, and SQLite.
+
+Here we use a locally deployed MinIO object storage, and create a file system with each tool:
+
+#### S3QL
+
+S3QL uses `mkfs.s3ql` to create a file system:
+
+```shell
+$ mkfs.s3ql --plain --backend-options no-ssl -L s3ql s3c://127.0.0.1:9000/s3ql/
+```
+
+Mount a file system using `mount.s3ql`:
+
+```shell
+$ mount.s3ql --compress none --backend-options no-ssl s3c://127.0.0.1:9000/s3ql/ mnt-s3ql
+```
+
+S3QL asks for the object storage API access credentials interactively on the command line when creating and mounting a file system.
+
+#### JuiceFS
+
+JuiceFS uses the `format` subcommand to create a file system:
+
+```shell
+$ juicefs format --storage minio \
+ --bucket http://127.0.0.1:9000/myjfs \
+ --access-key minioadmin \
+ --secret-key minioadmin \
+ sqlite3://myjfs.db \
+ myjfs
+```
+
+Mount a file system using `mount` subcommand:
+
+```shell
+$ sudo juicefs mount -d sqlite3://myjfs.db mnt-juicefs
+```
+
+JuiceFS only requires the object storage API access key when creating a file system; the relevant information is written into the metadata engine, so after creation there is no need to provide the object storage URL, access key, or other information again.
+
+## Summary
+
+**S3QL** adopts an object storage + SQLite storage structure. Storing data in blocks not only improves file read and write efficiency, but also reduces the resource overhead when a file is modified. Advanced features such as snapshots, data deduplication, and data retention, together with data compression and data encryption enabled by default, make S3QL very suitable for individuals who want to store files in cloud storage at a lower cost and more securely.
+
+**JuiceFS** supports object storage, HDFS, WebDAV, and local disks as data storage engines, and supports popular databases such as Redis, TiKV, MySQL, MariaDB, PostgreSQL, and SQLite as metadata storage engines. It provides a standard POSIX file system interface through FUSE, and a Java API that can directly replace HDFS to provide storage for Hadoop. It also provides a [Kubernetes CSI Driver](https://github.com/juicedata/juicefs-csi-driver), which can be used as the storage layer of Kubernetes for persistent data storage. JuiceFS is a file system designed for enterprise-level distributed data storage scenarios, and it is widely used in scenarios such as big data analytics, machine learning, container shared storage, data sharing, and backup.
diff --git a/docs/en/deployment/_share_via_nfs.md b/docs/en/deployment/_share_via_nfs.md
new file mode 100644
index 0000000..ff2d980
--- /dev/null
+++ b/docs/en/deployment/_share_via_nfs.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Deploy JuiceFS with NFS
+sidebar_position: 5
+---
+# Deploy JuiceFS with NFS
\ No newline at end of file
diff --git a/docs/en/deployment/_share_via_smb.md b/docs/en/deployment/_share_via_smb.md
new file mode 100644
index 0000000..13a6812
--- /dev/null
+++ b/docs/en/deployment/_share_via_smb.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Deploy JuiceFS with SMB
+sidebar_position: 6
+---
+# Deploy JuiceFS with SMB
\ No newline at end of file
diff --git a/docs/en/deployment/hadoop_java_sdk.md b/docs/en/deployment/hadoop_java_sdk.md
new file mode 100644
index 0000000..7e17b7c
--- /dev/null
+++ b/docs/en/deployment/hadoop_java_sdk.md
@@ -0,0 +1,589 @@
+---
+sidebar_label: Use JuiceFS on Hadoop Ecosystem
+sidebar_position: 3
+slug: /hadoop_java_sdk
+---
+# Use JuiceFS on Hadoop Ecosystem
+
+JuiceFS provides a [Hadoop-compatible FileSystem](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/introduction.html) through the Hadoop Java SDK. Various applications in the Hadoop ecosystem can use JuiceFS to store data smoothly without changing their code.
+
+## Requirements
+
+### 1. Hadoop and related components
+
+The JuiceFS Hadoop Java SDK is compatible with Hadoop 2.x and Hadoop 3.x, as well as with a variety of components in the Hadoop ecosystem.
+
+### 2. User permissions
+
+JuiceFS uses the local mapping of `user` and `UID`. Therefore, you should [sync all the needed users and their UIDs](../administration/sync_accounts_between_multiple_hosts.md) across the whole Hadoop cluster to avoid permission errors. Alternatively, you can specify a global user list and group file; please refer to the [relevant configurations](#other-configurations).
+
+### 3. File system
+
+You should first create at least one JuiceFS file system to provide storage for components related to the Hadoop ecosystem through the JuiceFS Java SDK. When deploying the Java SDK, specify the metadata engine address of the created file system in the configuration file.
+
+To create a file system, please refer to [JuiceFS Quick Start Guide](../getting-started/for_local.md).
+
+:::note
+If you want to use JuiceFS in a distributed environment, when creating a file system, please plan the object storage and database to be used reasonably to ensure that they can be accessed by each node in the cluster.
+:::
+
+### 4. Memory
+
+The JuiceFS Hadoop Java SDK may need up to 4 * [`juicefs.memory-size`](#io-configurations) of extra off-heap memory. With the default `juicefs.memory-size` of 300 MiB, this means up to 1.2 GB of additional memory (depending on the write load).
+
+## Client compilation
+
+:::note
+No matter which system environment the client is compiled for, the compiled JAR file has the same name and can only be deployed in the matching system environment. For example, when compiled in Linux, it can only be used in the Linux environment. In addition, since the compiled package depends on glibc, it is recommended to compile with a lower version system to ensure better compatibility.
+:::
+
+Compilation depends on the following tools:
+
+- [Go](https://golang.org/) 1.16+
+- JDK 8+
+- [Maven](https://maven.apache.org/) 3.3+
+- git
+- make
+- GCC 5.4+
+
+### Linux and macOS
+
+Clone the repository:
+
+```shell
+$ git clone https://github.com/juicedata/juicefs.git
+```
+
+Enter the directory and compile:
+
+:::note
+If Ceph RADOS is used to store data, you need to install `librados-dev` first and [build `libjfs.so`](https://github.com/juicedata/juicefs/blob/main/sdk/java/libjfs/Makefile#L22) with `-tags ceph`.
+:::
+
+```shell
+$ cd juicefs/sdk/java
+$ make
+```
+
+After the compilation, you can find the compiled `JAR` file in the `sdk/java/target` directory, including two versions:
+
+- Contains third-party dependent packages: `juicefs-hadoop-X.Y.Z.jar`
+- Does not include third-party dependent packages: `original-juicefs-hadoop-X.Y.Z.jar`
+
+It is recommended to use a version that includes third-party dependencies.
+
+### Windows
+
+The client used in the Windows environment needs to be obtained through cross-compilation on Linux or macOS. The compilation depends on [mingw-w64](https://www.mingw-w64.org/), which needs to be installed first.
+
+The steps are the same as compiling on Linux or macOS. For example, on the Ubuntu system, install the `mingw-w64` package first to solve the dependency problem:
+
+```shell
+$ sudo apt install mingw-w64
+```
+
+Clone and enter the JuiceFS source code directory, execute the following code to compile:
+
+```shell
+$ cd juicefs/sdk/java
+$ make win
+```
+
+## Deploy the client
+
+To enable each component of the Hadoop ecosystem to correctly identify JuiceFS, the following configurations are required:
+
+1. Place the compiled JAR file and `$JAVA_HOME/lib/tools.jar` into the `classpath` of the component. The installation paths of common big data platforms and components are shown in the table below.
+2. Put JuiceFS configurations into the configuration file of each Hadoop ecosystem component (usually `core-site.xml`), see [Client Configurations](#client-configurations) for details.
+
+It is recommended to place the JAR file in a fixed location and reference it from the other locations via symbolic links.
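+
+For example (a hypothetical layout; adjust the JAR version and the target directory to one of the paths listed below for your platform):
+
+```shell
+$ mkdir -p /opt/juicefs
+$ cp juicefs-hadoop-X.Y.Z.jar /opt/juicefs/
+$ ln -s /opt/juicefs/juicefs-hadoop-X.Y.Z.jar /usr/lib/hadoop/lib/
+```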
+
+### Big Data Platforms
+
+| Name | Installing Paths |
+| ----------------- | ------------------------------------------------------------ |
+| CDH | `/opt/cloudera/parcels/CDH/lib/hadoop/lib` `/opt/cloudera/parcels/CDH/spark/jars` `/var/lib/impala` |
+| HDP | `/usr/hdp/current/hadoop-client/lib` `/usr/hdp/current/hive-client/auxlib` `/usr/hdp/current/spark2-client/jars` |
+| Amazon EMR | `/usr/lib/hadoop/lib` `/usr/lib/spark/jars` `/usr/lib/hive/auxlib` |
+| Alibaba Cloud EMR | `/opt/apps/ecm/service/hadoop/*/package/hadoop*/share/hadoop/common/lib` `/opt/apps/ecm/service/spark/*/package/spark*/jars` `/opt/apps/ecm/service/presto/*/package/presto*/plugin/hive-hadoop2` `/opt/apps/ecm/service/hive/*/package/apache-hive*/lib` `/opt/apps/ecm/service/impala/*/package/impala*/lib` |
+| Tencent Cloud EMR | `/usr/local/service/hadoop/share/hadoop/common/lib` `/usr/local/service/presto/plugin/hive-hadoop2` `/usr/local/service/spark/jars` `/usr/local/service/hive/auxlib` |
+| UCloud UHadoop | `/home/hadoop/share/hadoop/common/lib` `/home/hadoop/hive/auxlib` `/home/hadoop/spark/jars` `/home/hadoop/presto/plugin/hive-hadoop2` |
+| Baidu Cloud EMR | `/opt/bmr/hadoop/share/hadoop/common/lib` `/opt/bmr/hive/auxlib` `/opt/bmr/spark2/jars` |
+
+### Community Components
+
+| Name | Installing Paths |
+| ------ | ------------------------------------ |
+| Spark | `${SPARK_HOME}/jars` |
+| Presto | `${PRESTO_HOME}/plugin/hive-hadoop2` |
+| Flink | `${FLINK_HOME}/lib` |
+
+### Client Configurations
+
+Please refer to the following tables to set the relevant JuiceFS file system parameters and write them into the configuration file, which is generally `core-site.xml`.
+
+#### Core Configurations
+
+| Configuration | Default Value | Description |
+| -------------------------------- | ---------------------------- | ------------------------------------------------------------ |
+| `fs.jfs.impl` | `io.juicefs.JuiceFileSystem` | Specify the storage implementation to be used. By default, the `jfs://` scheme is used. If you want to use a different scheme (e.g. `cfs://`), just modify it to `fs.cfs.impl`. No matter which scheme you use, it always accesses the data in JuiceFS. |
+| `fs.AbstractFileSystem.jfs.impl` | `io.juicefs.JuiceFS` | Specify the storage implementation to be used. By default, the `jfs://` scheme is used. If you want to use a different scheme (e.g. `cfs://`), just modify it to `fs.AbstractFileSystem.cfs.impl`. No matter which scheme you use, it always accesses the data in JuiceFS. |
+| `juicefs.meta` | | Specify the metadata engine address of the pre-created JuiceFS file system. You can configure multiple file systems for the client at the same time through the format of `juicefs.{vol_name}.meta`. Refer to ["Multiple file systems configuration"](#multiple-file-systems-configuration). |
+
+#### Cache Configurations
+
+| Configuration | Default Value | Description |
+| ---------------------------- | ------------- | ------------------------------------------------------------ |
+| `juicefs.cache-dir` | | Directory paths of local cache. Use colon to separate multiple paths. Wildcards in paths are also supported. **It's recommended to create these directories manually and set `0777` permission so that different applications can share the cache data.** |
+| `juicefs.cache-size` | 0 | Maximum size of local cache in MiB. The default value is 0, which means caching is disabled. When multiple cache directories are set, this is the total size. |
+| `juicefs.cache-full-block` | `true` | Whether to cache every read block; `false` means only cache random/small read blocks. |
+| `juicefs.free-space` | 0.1 | Minimum free space ratio of the cache directory |
+| `juicefs.attr-cache` | 0 | Expiration time of attribute cache in seconds |
+| `juicefs.entry-cache` | 0 | Expiration time of file entry cache in seconds |
+| `juicefs.dir-entry-cache` | 0 | Expiration time of directory entry cache in seconds |
+| `juicefs.discover-nodes-url` | | The URL to discover cluster nodes, refreshed every 10 minutes. YARN: `yarn`; Spark Standalone: `http://spark-master:web-ui-port/json/`; Spark ThriftServer: `http://thrift-server:4040/api/v1/applications/`; Presto: `http://coordinator:discovery-uri-port/v1/service/presto/` |
+
+#### I/O Configurations
+
+| Configuration | Default Value | Description |
+| ------------------------ | ------------- | ----------------------------------------------- |
+| `juicefs.max-uploads` | 20 | The max number of connections to upload |
+| `juicefs.max-deletes` | 2 | The max number of connections to delete |
+| `juicefs.get-timeout` | 5 | The max number of seconds to download an object |
+| `juicefs.put-timeout` | 60 | The max number of seconds to upload an object |
+| `juicefs.memory-size` | 300 | Total read/write buffering in MiB |
+| `juicefs.prefetch` | 1 | Prefetch N blocks in parallel |
+| `juicefs.upload-limit` | 0 | Bandwidth limit for upload in Mbps |
+| `juicefs.download-limit` | 0 | Bandwidth limit for download in Mbps |
+
+#### Other Configurations
+
+| Configuration | Default Value | Description |
+| ------------------------- | ------------- | ------------------------------------------------------------ |
+| `juicefs.bucket` | | Specify a different endpoint for object storage |
+| `juicefs.debug` | `false` | Whether to enable debug logging |
+| `juicefs.access-log` | | Access log path. Ensure Hadoop application has write permission, e.g. `/tmp/juicefs.access.log`. The log file will rotate automatically to keep at most 7 files. |
+| `juicefs.superuser` | `hdfs` | The super user |
+| `juicefs.users` | `null` | The path of the username and UID list file, e.g. `jfs://name/etc/users`. The file format is `username:UID`, one user per line. |
+| `juicefs.groups` | `null` | The path of the group name, GID and group members list file, e.g. `jfs://name/etc/groups`. The file format is `groupname:GID:user1,user2`, one group per line. |
+| `juicefs.umask` | `null` | The umask used when creating files and directories (e.g. `0022`), default value is `fs.permissions.umask-mode`. |
+| `juicefs.push-gateway` | | [Prometheus Pushgateway](https://github.com/prometheus/pushgateway) address, in the format `host:port`. |
+| `juicefs.push-interval` | 10 | Prometheus push interval in seconds |
+| `juicefs.push-auth` | | [Prometheus basic auth](https://prometheus.io/docs/guides/basic-auth) information, in the format `username:password`. |
+| `juicefs.fast-resolve` | `true` | Whether to enable faster metadata lookups using Redis Lua scripts |
+| `juicefs.no-usage-report` | `false` | Whether to disable usage reporting. JuiceFS only collects anonymous usage data (e.g. version number); no user data or any sensitive data is collected. |
+
+#### Multiple file systems configuration
+
+When multiple JuiceFS file systems need to be used at the same time, all the above configuration items can be specified for a specific file system. You only need to put the file system name in the middle of the configuration item, such as `jfs1` and `jfs2` in the following example:
+
+```xml
+<property>
+  <name>juicefs.jfs1.meta</name>
+  <value>redis://jfs1.host:port/1</value>
+</property>
+<property>
+  <name>juicefs.jfs2.meta</name>
+  <value>redis://jfs2.host:port/1</value>
+</property>
+```
+
+#### Configuration Example
+
+The following is a commonly used configuration example. Please replace the `{HOST}`, `{PORT}` and `{DB}` variables in the `juicefs.meta` configuration with actual values.
+
+```xml
+<property>
+  <name>fs.jfs.impl</name>
+  <value>io.juicefs.JuiceFileSystem</value>
+</property>
+<property>
+  <name>fs.AbstractFileSystem.jfs.impl</name>
+  <value>io.juicefs.JuiceFS</value>
+</property>
+<property>
+  <name>juicefs.meta</name>
+  <value>redis://{HOST}:{PORT}/{DB}</value>
+</property>
+<property>
+  <name>juicefs.cache-dir</name>
+  <value>/data*/jfs</value>
+</property>
+<property>
+  <name>juicefs.cache-size</name>
+  <value>1024</value>
+</property>
+<property>
+  <name>juicefs.access-log</name>
+  <value>/tmp/juicefs.access.log</value>
+</property>
+```
+
+## Configuration in Hadoop
+
+Please refer to the aforementioned configuration tables and add configuration parameters to the Hadoop configuration file `core-site.xml`.
+
+### CDH6
+
+If you are using CDH 6, in addition to modifying `core-site`, you also need to modify `mapreduce.application.classpath` through the YARN service interface, adding:
+
+```shell
+$HADOOP_COMMON_HOME/lib/juicefs-hadoop.jar
+```
+
+### HDP
+
+In addition to modifying `core-site`, you also need to modify the configuration `mapreduce.application.classpath` through the MapReduce2 service interface and add it at the end (variables do not need to be replaced):
+
+```shell
+/usr/hdp/${hdp.version}/hadoop/lib/juicefs-hadoop.jar
+```
+
+### Flink
+
+Add configuration parameters to `conf/flink-conf.yaml`. If you only use JuiceFS in Flink, you don't need to configure JuiceFS in the Hadoop environment; configuring the Flink client is enough.
+
+### Hudi
+
+:::note
+The latest version of Hudi (v0.10.0) does not yet support JuiceFS, you need to compile the latest master branch yourself.
+:::
+
+Please refer to ["Hudi Official Documentation"](https://hudi.apache.org/docs/next/jfs_hoodie) to learn how to configure JuiceFS.
+
+### Restart Services
+
+When the following components need to access JuiceFS, they should be restarted.
+
+:::note
+Before restarting, confirm that the JuiceFS related configuration has been written to the configuration file of each component; usually it can be found in `core-site.xml` on the machine where the component's service is deployed.
+:::
+
+| Components | Services |
+| ---------- | -------------------------- |
+| Hive | HiveServer Metastore |
+| Spark | ThriftServer |
+| Presto | Coordinator Worker |
+| Impala | Catalog Server Daemon |
+| HBase | Master RegionServer |
+
+HDFS, Hue, ZooKeeper and other services don't need to be restarted.
+
+If the exception `Class io.juicefs.JuiceFileSystem not found` or `No FileSystem for scheme: jfs` occurs after restarting, refer to the [FAQ](#faq).
+
+## Environmental Verification
+
+After deploying the JuiceFS Java SDK, you can verify that the deployment succeeded in the following ways.
+
+### Hadoop
+
+```bash
+$ hadoop fs -ls jfs://{JFS_NAME}/
+```
+
+:::info
+The `JFS_NAME` is the volume name when you format JuiceFS file system.
+:::
+
+### Hive
+
+```sql
+CREATE TABLE IF NOT EXISTS person
+(
+ name STRING,
+ age INT
+) LOCATION 'jfs://{JFS_NAME}/tmp/person';
+```
+
+## Monitoring metrics collection
+
+Please see the ["Monitoring"](../administration/monitoring.md) documentation to learn how to collect and display JuiceFS monitoring metrics.
+
+## Benchmark
+
+Here are some ways to use the JuiceFS client's built-in stress testing tool to test the performance of a successfully deployed client environment.
+
+
+### 1. Local Benchmark
+
+#### Metadata
+
+- **create**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench create -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
+ ```
+
+ This command will create 10000 empty files
+
+- **open**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench open -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
+ ```
+
+ This command will open 10000 files without reading data
+
+- **rename**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench rename -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
+ ```
+
+- **delete**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench delete -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
+ ```
+
+- **For reference**
+
+ | Operation | TPS | Latency (ms) |
+ | --------- | ---- | ------------ |
+ | create | 644 | 1.55 |
+ | open | 3467 | 0.29 |
+ | rename | 483 | 2.07 |
+ | delete | 506 | 1.97 |
+
+#### I/O Performance
+
+- **sequential write**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar dfsio -write -size 20000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO -local
+ ```
+
+- **sequential read**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar dfsio -read -size 20000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO -local
+ ```
+
+ When the command is run a second time, the result may be much better than the first run because the data has already been cached in memory or on the local disk. Clear the local disk cache before re-running the test.
+
+- **For reference**
+
+ | Operation | Throughput (MB/s) |
+ | --------- | ----------------- |
+ | write | 647 |
+ | read | 111 |
+
+If the machine's network bandwidth is relatively low, the test can generally saturate it, i.e. the network bandwidth becomes the bottleneck.
+
+### 2. Distributed Benchmark
+
+The following commands start distributed MapReduce tasks to test metadata and I/O performance. During the test, make sure the cluster has sufficient resources to start the required map tasks.
+
+Computing resources used in this test:
+
+- **Server**: 4 cores and 32 GB memory, burst bandwidth 5Gbit/s x 3
+- **Database**: Alibaba Cloud Redis 5.0 Community 4G Master-Slave Edition
+
+#### Metadata
+
+- **create**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench create -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
+ ```
+
+ 10 map tasks, each with 10 threads; each thread creates 1000 empty files (100000 files in total)
+
+- **open**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench open -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
+ ```
+
+ 10 map tasks, each with 10 threads; each thread opens 1000 files (100000 files in total)
+
+- **rename**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench rename -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
+ ```
+
+ 10 map tasks, each with 10 threads; each thread renames 1000 files (100000 files in total)
+
+- **delete**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar nnbench delete -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
+ ```
+
+ 10 map tasks, each with 10 threads; each thread deletes 1000 files (100000 files in total)
+
+- **For reference**
+
+ - 10 threads
+
+ | Operation | IOPS | Latency (ms) |
+ | --------- | ---- | ------------ |
+ | create | 4178 | 2.2 |
+ | open | 9407 | 0.8 |
+ | rename | 3197 | 2.9 |
+ | delete | 3060 | 3.0 |
+
+ - 100 threads
+
+ | Operation | IOPS | Latency (ms) |
+ | --------- | ---- | ------------ |
+ | create | 11773 | 7.9 |
+ | open | 34083 | 2.4 |
+ | rename | 8995 | 10.8 |
+ | delete | 7191 | 13.6 |
+
+#### I/O Performance
+
+- **sequential write**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar dfsio -write -maps 10 -size 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO
+ ```
+
+ 10 map tasks, each writing 10000 MB of random data sequentially
+
+- **sequential read**
+
+ ```shell
+ hadoop jar juicefs-hadoop.jar dfsio -read -maps 10 -size 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO
+ ```
+
+ 10 map tasks, each reading 10000 MB of data sequentially
+
+
+- **For reference**
+
+ | Operation | Average throughput (MB/s) | Total Throughput (MB/s) |
+ | --------- | ------------------------- | ----------------------- |
+ | write | 198 | 1835 |
+ | read | 124 | 1234 |
+
+### 3. TPC-DS
+
+The test dataset is 100GB in size, and both Parquet and ORC file formats are tested.
+
+This test only tests the first 10 queries.
+
+Spark Thrift JDBC/ODBC Server is used to start a long-running Spark process, and the queries are then submitted via a Beeline connection.
+
+#### Test Hardware
+
+| Node Category | Instance Type | CPU | Memory | Disk | Number |
+| ------------- | ------------- | --- | ------ | ---- | ------ |
+| Master | Alibaba Cloud ecs.r6.xlarge | 4 | 32GiB | System Disk: 100GiB | 1 |
+| Core | Alibaba Cloud ecs.r6.xlarge | 4 | 32GiB | System Disk: 100GiB Data Disk: 500GiB Ultra Disk x 2 | 3 |
+
+#### Software Configuration
+
+##### Spark Thrift JDBC/ODBC Server
+
+```shell
+${SPARK_HOME}/sbin/start-thriftserver.sh \
+ --master yarn \
+ --driver-memory 8g \
+ --executor-memory 10g \
+ --executor-cores 3 \
+ --num-executors 3 \
+ --conf spark.locality.wait=100 \
+ --conf spark.sql.crossJoin.enabled=true \
+ --hiveconf hive.server2.thrift.port=10001
+```
+
+##### JuiceFS Cache Configurations
+
+The two data disks of each Core node are mounted at `/data01` and `/data02`, and `core-site.xml` is configured as follows:
+
+```xml
+<property>
+  <name>juicefs.cache-size</name>
+  <value>200000</value>
+</property>
+<property>
+  <name>juicefs.cache-dir</name>
+  <value>/data*/jfscache</value>
+</property>
+<property>
+  <name>juicefs.cache-full-block</name>
+  <value>false</value>
+</property>
+<property>
+  <name>juicefs.discover-nodes-url</name>
+  <value>yarn</value>
+</property>
+<property>
+  <name>juicefs.attr-cache</name>
+  <value>3</value>
+</property>
+<property>
+  <name>juicefs.entry-cache</name>
+  <value>3</value>
+</property>
+<property>
+  <name>juicefs.dir-entry-cache</name>
+  <value>3</value>
+</property>
+```
+
+#### Test
+
+The task submission command is as follows:
+
+```shell
+${SPARK_HOME}/bin/beeline -u jdbc:hive2://localhost:10001/${DATABASE} \
+ -n hadoop \
+ -f query{i}.sql
+```
+
+#### Results
+
+JuiceFS can use local disks as a cache to accelerate data access. The following data is the result (in seconds) after four runs, using Redis and TiKV respectively as the metadata engine of JuiceFS.
+
+##### ORC
+
+| Queries | JuiceFS (Redis) | JuiceFS (TiKV) | HDFS |
+| ------- | --------------- | -------------- | ---- |
+| q1 | 20 | 20 | 20 |
+| q2 | 28 | 33 | 26 |
+| q3 | 24 | 27 | 28 |
+| q4 | 300 | 309 | 290 |
+| q5 | 116 | 117 | 91 |
+| q6 | 37 | 42 | 41 |
+| q7 | 24 | 28 | 23 |
+| q8 | 13 | 15 | 16 |
+| q9 | 87 | 112 | 89 |
+| q10 | 23 | 24 | 22 |
+
+![orc](../images/spark_ql_orc.png)
+
+##### Parquet
+
+| Queries | JuiceFS (Redis) | JuiceFS (TiKV) | HDFS |
+| ------- | --------------- | -------------- | ---- |
+| q1 | 33 | 35 | 39 |
+| q2 | 28 | 32 | 31 |
+| q3 | 23 | 25 | 24 |
+| q4 | 273 | 284 | 266 |
+| q5 | 96 | 107 | 94 |
+| q6 | 36 | 35 | 42 |
+| q7 | 28 | 30 | 24 |
+| q8 | 11 | 12 | 14 |
+| q9 | 85 | 97 | 77 |
+| q10 | 24 | 28 | 38 |
+
+![parquet](../images/spark_sql_parquet.png)
+
+
+## FAQ
+
+### 1. `Class io.juicefs.JuiceFileSystem not found` exception
+
+It means the JAR file was not loaded. You can verify this with `lsof -p {pid} | grep juicefs`.
+
+Check whether the JAR file is placed in the right location and whether other users have read permission on it.
+
+Some Hadoop distributions also require modifying `mapred-site.xml` and appending the JAR file path to the end of the `mapreduce.application.classpath` parameter.
+
+### 2. `No FileSystem for scheme: jfs` exception
+
+It means the JuiceFS Hadoop Java SDK was not configured properly. Check whether the JuiceFS related configuration is present in the `core-site.xml` of the component's configuration.
diff --git a/docs/en/deployment/how_to_use_on_kubernetes.md b/docs/en/deployment/how_to_use_on_kubernetes.md
new file mode 100644
index 0000000..0f72964
--- /dev/null
+++ b/docs/en/deployment/how_to_use_on_kubernetes.md
@@ -0,0 +1,370 @@
+---
+sidebar_label: Use JuiceFS on Kubernetes
+sidebar_position: 2
+slug: /how_to_use_on_kubernetes
+---
+# Use JuiceFS on Kubernetes
+
+JuiceFS is well suited as a storage layer for Kubernetes clusters, and there are currently two common ways to use it.
+
+## JuiceFS CSI Driver
+
+[JuiceFS CSI Driver](https://github.com/juicedata/juicefs-csi-driver) follows the [CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md) specification, implementing the interface between the container orchestration system and the JuiceFS file system, and supports dynamically provisioning JuiceFS volumes for use by Pods.
+
+### Prerequisites
+
+- Kubernetes 1.14+
+
+### Installation
+
+JuiceFS CSI Driver has the following two installation methods.
+
+#### Install with Helm
+
+Helm is the package manager of Kubernetes, and Chart is the package managed by Helm. You can think of it as the equivalent of Homebrew formula, APT dpkg, or YUM RPM in Kubernetes.
+
+This installation method requires Helm **3.1.0** and above. For the specific installation method, please refer to ["Helm Installation Guide"](https://github.com/helm/helm#install).
+
+1. Prepare a configuration file for the basic information of the storage class, for example `values.yaml`; copy and complete the following configuration information. The `backend` part contains the JuiceFS file system information; you can refer to the [JuiceFS Quick Start Guide](../getting-started/for_local.md) for details. If you are using a JuiceFS volume that has been created in advance, you only need to fill in the `name` and `metaurl` items. The `mountPod` part sets the CPU and memory resources for the Pods using this driver. Unneeded items can be deleted or left blank.
+
+```yaml
+storageClasses:
+- name: juicefs-sc
+ enabled: true
+ reclaimPolicy: Retain
+ backend:
+ name: "test"
+ metaurl: "redis://juicefs.afyq4z.0001.use1.cache.amazonaws.com/3"
+ storage: "s3"
+ accessKey: ""
+ secretKey: ""
+ bucket: "https://juicefs-test.s3.us-east-1.amazonaws.com"
+ mountPod:
+ resources:
+ limits:
+ cpu: ""
+ memory: ""
+ requests:
+ cpu: ""
+ memory: ""
+```
+
+On cloud platforms that support "role management", you can assign a "service role" to Kubernetes nodes to achieve key-free access to the object storage API. In this case, there is no need to set `accessKey` and `secretKey` in the configuration file.
+
+2. Execute the following three commands in sequence to deploy JuiceFS CSI Driver through Helm.
+
+```shell
+$ helm repo add juicefs-csi-driver https://juicedata.github.io/juicefs-csi-driver/
+$ helm repo update
+$ helm install juicefs-csi-driver juicefs-csi-driver/juicefs-csi-driver -n kube-system -f ./values.yaml
+```
+
+3. Check the deployment
+
+- **Check Pods**: the deployment launches a `StatefulSet` named `juicefs-csi-controller` with replica `1` and a `DaemonSet` named `juicefs-csi-node`, so running `kubectl -n kube-system get pods -l app.kubernetes.io/name=juicefs-csi-driver` should show `n+1` running pods (where `n` is the number of worker nodes of the Kubernetes cluster). For example:
+
+```sh
+$ kubectl -n kube-system get pods -l app.kubernetes.io/name=juicefs-csi-driver
+NAME READY STATUS RESTARTS AGE
+juicefs-csi-controller-0 3/3 Running 0 22m
+juicefs-csi-node-v9tzb 3/3 Running 0 14m
+```
+
+- **Check secret**: `kubectl -n kube-system describe secret juicefs-sc-secret` will show the secret with above `backend` fields in `values.yaml`:
+
+```sh
+$ kubectl -n kube-system describe secret juicefs-sc-secret
+Name: juicefs-sc-secret
+Namespace: kube-system
+Labels: app.kubernetes.io/instance=juicefs-csi-driver
+ app.kubernetes.io/managed-by=Helm
+ app.kubernetes.io/name=juicefs-csi-driver
+ app.kubernetes.io/version=0.7.0
+ helm.sh/chart=juicefs-csi-driver-0.1.0
+Annotations: meta.helm.sh/release-name: juicefs-csi-driver
+ meta.helm.sh/release-namespace: default
+
+Type: Opaque
+
+Data
+====
+access-key: 0 bytes
+bucket: 47 bytes
+metaurl: 54 bytes
+name: 4 bytes
+secret-key: 0 bytes
+storage: 2 bytes
+```
+
+- **Check storage class**: `kubectl get sc juicefs-sc` will show the storage class like this:
+
+```sh
+$ kubectl get sc juicefs-sc
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+juicefs-sc csi.juicefs.com Retain Immediate false 69m
+```
+
+#### Install with kubectl
+
+Since Kubernetes deprecates some old APIs as versions evolve, you need to select the deployment file that matches the version of Kubernetes you are using:
+
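+If you are not sure which version your cluster is running, you can check it first:
+
+```shell
+$ kubectl version
+```
+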
+**Kubernetes v1.18 and above**
+
+```shell
+$ kubectl apply -f https://raw.githubusercontent.com/juicedata/juicefs-csi-driver/master/deploy/k8s.yaml
+```
+
+**Version below Kubernetes v1.18**
+
+```shell
+$ kubectl apply -f https://raw.githubusercontent.com/juicedata/juicefs-csi-driver/master/deploy/k8s_before_v1_18.yaml
+```
+
+**Create storage class**
+
+Create a configuration file with reference to the following content, for example: `juicefs-sc.yaml`, fill in the configuration information of the JuiceFS file system in the `stringData` section:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: juicefs-sc-secret
+ namespace: kube-system
+type: Opaque
+stringData:
+ name: "test"
+ metaurl: "redis://juicefs.afyq4z.0001.use1.cache.amazonaws.com/3"
+ storage: "s3"
+ bucket: "https://juicefs-test.s3.us-east-1.amazonaws.com"
+ access-key: ""
+ secret-key: ""
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: juicefs-sc
+provisioner: csi.juicefs.com
+reclaimPolicy: Retain
+volumeBindingMode: Immediate
+parameters:
+ csi.storage.k8s.io/node-publish-secret-name: juicefs-sc-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: kube-system
+ csi.storage.k8s.io/provisioner-secret-name: juicefs-sc-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: kube-system
+```
+
+Execute the command to deploy the storage class:
+
+```shell
+$ kubectl apply -f ./juicefs-sc.yaml
+```
+
+In addition, you can also extract the Secret part of the above configuration file and create it on the command line through `kubectl`:
+
+```bash
+$ kubectl -n kube-system create secret generic juicefs-sc-secret \
+ --from-literal=name=test \
+ --from-literal=metaurl=redis://juicefs.afyq4z.0001.use1.cache.amazonaws.com/3 \
+ --from-literal=storage=s3 \
+ --from-literal=bucket=https://juicefs-test.s3.us-east-1.amazonaws.com \
+ --from-literal=access-key="" \
+ --from-literal=secret-key=""
+```
+
+In this way, the storage class configuration file `juicefs-sc.yaml` should look like the following:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: juicefs-sc
+provisioner: csi.juicefs.com
+reclaimPolicy: Retain
+parameters:
+ csi.storage.k8s.io/node-publish-secret-name: juicefs-sc-secret
+ csi.storage.k8s.io/node-publish-secret-namespace: kube-system
+ csi.storage.k8s.io/provisioner-secret-name: juicefs-sc-secret
+ csi.storage.k8s.io/provisioner-secret-namespace: kube-system
+```
+
+Then deploy the storage class through `kubectl apply`:
+
+```shell
+$ kubectl apply -f ./juicefs-sc.yaml
+```
+
+### Use JuiceFS
+
+JuiceFS CSI Driver supports both static and dynamic PV. You can either manually assign the PV created in advance to Pods, or you can dynamically create volumes through PVC when Pods are deployed.
+
+For example, you can use the following configuration to create a configuration file named `development.yaml`, which creates a persistent volume for the Nginx container through PVC and mounts it to the container's `/config` directory:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: web-pvc
+spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 10Pi
+ storageClassName: juicefs-sc
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-run
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: linuxserver/nginx
+ ports:
+ - containerPort: 80
+ volumeMounts:
+ - mountPath: /config
+ name: web-data
+ volumes:
+ - name: web-data
+ persistentVolumeClaim:
+ claimName: web-pvc
+```
+
+Deploy Pods via `kubectl apply`:
+
+```
+$ kubectl apply -f ./development.yaml
+```
+
+After the deployment is successful, check the pods status:
+
+```shell
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+nginx-run-7d6fb7d6df-cfsvp 1/1 Running 0 21m
+```
+
+We can simply use the `kubectl exec` command to view the file system mount status in the container:
+
+```shell
+$ kubectl exec nginx-run-7d6fb7d6df-cfsvp -- df -Th
+Filesystem Type Size Used Avail Use% Mounted on
+overlay overlay 40G 7.0G 34G 18% /
+tmpfs tmpfs 64M 0 64M 0% /dev
+tmpfs tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
+JuiceFS:jfs fuse.juicefs 1.0P 180M 1.0P 1% /config
+...
+```
+
+The output from the container shows that everything is as expected: the JuiceFS volume has been mounted to the `/config` directory we specified.
+
+When a PV is dynamically created through PVC as above, JuiceFS will create a directory with the same name as the PV in the root directory of the file system and mount it to the container. Execute the following command to view all PVs in the cluster:
+
+```shell
+$ kubectl get pv -A
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+pvc-b670c8a1-2962-497c-afa2-33bc8b8bb05d 10Pi RWX Retain Bound default/web-pvc juicefs-sc 34m
+```
+
+If you mount the same JuiceFS file system on an external host, you can see the directories backing the PVs that are currently in use, as well as those that have been created.
+
+![](../images/pv-on-juicefs.png)
+
+For more details about JuiceFS CSI Driver please refer to [project homepage](https://github.com/juicedata/juicefs-csi-driver).
+
+### Create more JuiceFS storage classes
+
+You can repeat the previous steps as needed to create any number of storage classes through the JuiceFS CSI Driver. Be sure to change the storage class name and the JuiceFS file system information to avoid conflicts with existing storage classes. For example, when using Helm, you can create a configuration file named `jfs2.yaml`:
+
+```yaml
+storageClasses:
+- name: jfs-sc2
+ enabled: true
+ reclaimPolicy: Retain
+ backend:
+ name: "jfs-2"
+ metaurl: "redis://example.abc.0001.use1.cache.amazonaws.com/3"
+ storage: "s3"
+ accessKey: ""
+ secretKey: ""
+ bucket: "https://jfs2.s3.us-east-1.amazonaws.com"
+```
+
+Execute the Helm command to deploy:
+
+```shell
+$ helm repo add juicefs-csi-driver https://juicedata.github.io/juicefs-csi-driver/
+$ helm repo update
+$ helm upgrade juicefs-csi-driver juicefs-csi-driver/juicefs-csi-driver --install -f ./jfs2.yaml
+```
+
+View the storage classes in the cluster:
+
+```shell
+$ kubectl get sc
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+juicefs-sc csi.juicefs.com Retain Immediate false 88m
+juicefs-sc2 csi.juicefs.com Retain Immediate false 13m
+standard (default) k8s.io/minikube-hostpath Delete Immediate false 128m
+```
+
+### Monitoring
+
+Please see the ["Monitoring"](../administration/monitoring.md) documentation to learn how to collect and display JuiceFS monitoring metrics.
+
+## Mount JuiceFS in the container
+
+In some cases, you may need to mount JuiceFS volume directly in the container, which requires the use of the JuiceFS client in the container. You can refer to the following `Dockerfile` sample to integrate the JuiceFS client into the application image:
+
+```dockerfile
+FROM alpine:latest
+LABEL maintainer="Juicedata "
+
+# Install JuiceFS client
+RUN apk add --no-cache curl && \
+ JFS_LATEST_TAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '"' -f 4 | tr -d 'v') && \
+ wget "https://github.com/juicedata/juicefs/releases/download/v${JFS_LATEST_TAG}/juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz" && \
+ tar -zxf "juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz" && \
+ install juicefs /usr/bin && \
+ rm juicefs "juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz" && \
+ rm -rf /var/cache/apk/* && \
+ apk del curl
+
+ENTRYPOINT ["/usr/bin/juicefs", "mount"]
+```
+
+Since JuiceFS needs to use the FUSE device to mount the file system, it is necessary to allow the container to run in a privileged mode when creating a Pod:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-run
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: linuxserver/nginx
+ ports:
+ - containerPort: 80
+ securityContext:
+ privileged: true
+```
+
+> ⚠️ **Risk Warning**: After enabling privileged mode with `privileged: true`, the container has access to all devices on the host, i.e. full control of the host's kernel. Improper use can introduce serious security risks, so please conduct a thorough security assessment before using this method.
diff --git a/docs/en/deployment/juicefs_on_docker.md b/docs/en/deployment/juicefs_on_docker.md
new file mode 100644
index 0000000..da13af9
--- /dev/null
+++ b/docs/en/deployment/juicefs_on_docker.md
@@ -0,0 +1,107 @@
+---
+sidebar_label: Use JuiceFS on Docker
+sidebar_position: 1
+slug: /juicefs_on_docker
+---
+# Use JuiceFS on Docker
+
+There are three ways to use JuiceFS on Docker:
+
+## 1. Volume Mapping
+
+This method is to map the directories in the JuiceFS mount point to the Docker container. For example, the JuiceFS storage is mounted in the `/mnt/jfs` directory. When creating a container, you can map JuiceFS storage to the Docker container as follows:
+
+```shell
+$ sudo docker run -d --name nginx \
+ -v /mnt/jfs/html:/usr/share/nginx/html \
+ -p 8080:80 \
+ nginx
+```
+
+By default, only the user who mounted JuiceFS has access permission to the storage. If you need to map JuiceFS storage into a Docker container and you did not mount it as root, you need to enable the FUSE `user_allow_other` option first, and then re-mount JuiceFS with the `-o allow_other` option.
+
+> **Note**: JuiceFS storage mounted with root user identity or `sudo` will automatically add the `allow_other` option, no manual setting is required.
+
+### FUSE Setting
+
+By default, the `allow_other` option is only allowed to be used by the root user. In order to allow other users to use this mount option, the FUSE configuration file needs to be modified.
+
+#### Change the configuration file
+
+Edit the configuration file of FUSE, usually `/etc/fuse.conf`:
+
+```sh
+$ sudo nano /etc/fuse.conf
+```
+
+Uncomment `user_allow_other` by deleting the `# ` in front of it, so that the file looks like this:
+
+```conf
+# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)
+
+# Set the maximum number of FUSE mounts allowed to non-root users.
+# The default is 1000.
+#mount_max = 1000
+
+# Allow non-root users to specify the allow_other or allow_root mount options.
+user_allow_other
+```
+
+#### Re-mount JuiceFS
+
+After the `allow_other` option of FUSE is enabled, you need to re-mount the JuiceFS file system with the `allow_other` option, for example:
+
+```sh
+$ juicefs mount -d -o allow_other redis://:6379/1 /mnt/jfs
+```
+
+## 2. Docker Volume Plugin
+
+We can also use [volume plugin](https://docs.docker.com/engine/extend/) to access JuiceFS.
+
+```sh
+$ docker plugin install juicedata/juicefs
+Plugin "juicedata/juicefs" is requesting the following privileges:
+ - network: [host]
+ - device: [/dev/fuse]
+ - capabilities: [CAP_SYS_ADMIN]
+Do you grant the above permissions? [y/N]
+
+$ docker volume create -d juicedata/juicefs:latest -o name={{VOLUME_NAME}} -o metaurl={{META_URL}} -o access-key={{ACCESS_KEY}} -o secret-key={{SECRET_KEY}} jfsvolume
+$ docker run -it -v jfsvolume:/opt busybox ls /opt
+```
+
+Replace `{{VOLUME_NAME}}`, `{{META_URL}}`, `{{ACCESS_KEY}}` and `{{SECRET_KEY}}` above with your own volume settings. For more details about the JuiceFS volume plugin, refer to the [juicedata/docker-volume-juicefs](https://github.com/juicedata/docker-volume-juicefs) repository.
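+
+When the volume is no longer needed, it can be cleaned up; a sketch, assuming no container still uses the volume:
+
+```sh
+$ docker volume rm jfsvolume
+$ docker plugin disable juicedata/juicefs
+$ docker plugin rm juicedata/juicefs
+```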
+
+## 3. Mount JuiceFS in a Container
+
+This method is to mount and use the JuiceFS storage directly in the Docker container. Compared with the first method, directly mounting JuiceFS in the container can reduce the chance of file misoperation. It also makes container management clearer and more intuitive.
+
+Since mounting the file system inside the container requires the JuiceFS client, the steps of downloading (or copying) the JuiceFS client and mounting the file system need to be written into the Dockerfile and the image rebuilt. For example, you can refer to the following Dockerfile to package the JuiceFS client into an Alpine image.
+
+```dockerfile
+FROM alpine:latest
+LABEL maintainer="Juicedata "
+
+# Install JuiceFS client
+RUN apk add --no-cache curl && \
+ JFS_LATEST_TAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '"' -f 4 | tr -d 'v') && \
+ wget "https://github.com/juicedata/juicefs/releases/download/v${JFS_LATEST_TAG}/juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz" && \
+ tar -zxf "juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz" && \
+ install juicefs /usr/bin && \
+ rm juicefs "juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz" && \
+ rm -rf /var/cache/apk/* && \
+ apk del curl
+
+ENTRYPOINT ["/usr/bin/juicefs", "mount"]
+```
+
+In addition, since the use of FUSE in the container requires corresponding permissions, when creating the container, you need to specify the `--privileged=true` option, for example:
+
+```shell
+$ sudo docker run -d --name nginx \
+ -v /mnt/jfs/html:/usr/share/nginx/html \
+ -p 8080:80 \
+ --privileged=true \
+ nginx-with-jfs
+```
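+
+As a sketch, the mount-only image defined by the Dockerfile above could be built and started like this; the image name, metadata URL and in-container mount path are illustrative:
+
+```shell
+$ sudo docker build -t juicefs-mount .
+$ sudo docker run -d --name jfs-mount --privileged=true \
+    juicefs-mount redis://192.168.1.10:6379/1 /mnt/jfs
+```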
diff --git a/docs/en/deployment/s3_gateway.md b/docs/en/deployment/s3_gateway.md
new file mode 100644
index 0000000..0d62dbe
--- /dev/null
+++ b/docs/en/deployment/s3_gateway.md
@@ -0,0 +1,247 @@
+---
+sidebar_label: Deploy JuiceFS S3 Gateway
+sidebar_position: 4
+slug: /s3_gateway
+---
+# Deploy JuiceFS S3 Gateway
+
+JuiceFS has introduced S3 gateway since v0.11, a feature implemented through the [MinIO S3 Gateway](https://docs.min.io/docs/minio-gateway-for-s3.html). It provides an S3-compatible RESTful API for files on JuiceFS, enabling the management of files stored on JuiceFS with tools such as s3cmd, AWS CLI, MinIO Client (mc), etc. in cases where mounting is not convenient. In addition, S3 gateway also provides a web-based file manager that allows users to use a browser to manage the files on JuiceFS.
+
+Because JuiceFS stores files in chunks, the files cannot be accessed directly using the interface of the underlying object storage. The S3 gateway provides similar access to the underlying object storage, with the following architecture diagram.
+
+![](../images/juicefs-s3-gateway-arch.png)
+
+## Prerequisites
+
+The S3 gateway is a feature built on top of the JuiceFS file system. If you do not have a JuiceFS file system, please refer to the [quick start guide](../getting-started/for_local.md) to create one first.
+
+JuiceFS S3 gateway is a feature introduced in v0.11, please make sure you have the latest version of JuiceFS.
+
+## Quickstart
+
+The S3 gateway can be enabled on the current host using the `gateway` subcommand of JuiceFS. Before enabling the feature, you need to set the `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD` environment variables, which are the Access Key and Secret Key used for authentication when accessing the S3 API; they can simply be considered the username and password of the S3 gateway. For example:
+
+```shell
+$ export MINIO_ROOT_USER=admin
+$ export MINIO_ROOT_PASSWORD=12345678
+$ juicefs gateway redis://localhost:6379 localhost:9000
+```
+
+Among the above three commands, the first two commands are used to set environment variables. Note that the length of `MINIO_ROOT_USER` is at least 3 characters, and the length of `MINIO_ROOT_PASSWORD` is at least 8 characters (Windows users should set the environment variable with the `set` command instead, e.g., `set MINIO_ROOT_USER=admin`).
+
+The last command is used to enable the S3 gateway. The `gateway` subcommand requires at least two parameters: the first is the URL of the database where the metadata is stored, and the second is the address and port on which the S3 gateway listens. You can add [other options](../reference/command_reference.md#juicefs-gateway) to the `gateway` subcommand to tune the S3 gateway as needed, for example, to set the local cache size to 20 GiB:
+
+```shell
+$ juicefs gateway --cache-size 20480 redis://localhost:6379 localhost:9000
+```
+
+In this example, we assume that the JuiceFS file system is using a local Redis database. When the S3 gateway is enabled, the administrative interface of the S3 gateway can be accessed on the **current host** using the address `http://localhost:9000`.
+
+![](../images/s3-gateway-file-manager.jpg)
+
+If you want to access the S3 gateway through other hosts on the LAN or the Internet, you need to adjust the listening address, e.g.
+
+```shell
+$ juicefs gateway redis://localhost:6379 0.0.0.0:9000
+```
+
+In this way, the S3 gateway accepts requests from all networks. S3 clients in different locations can access it using different addresses, e.g.
+
+- Third-party clients in the host where the S3 gateway is located can use `http://127.0.0.1:9000` or `http://localhost:9000` for access.
+- A third-party client on the same LAN as the host where the S3 gateway is located can access it using `http://192.168.1.8:9000` (assuming the intranet IP address of the S3 gateway-enabled host is 192.168.1.8).
+- The S3 gateway can be accessed via the Internet using `http://110.220.110.220:9000` (assuming that the public IP address of the S3 gateway-enabled host is 110.220.110.220).
+
+## Accessing S3 gateway
+
+The JuiceFS S3 gateway can be accessed by various clients, desktop applications, web applications, etc. that support the S3 API. Please note the address and port that the S3 gateway listens on when using it.
+
+:::tip
+The following examples are for using a third-party client to access the S3 gateway running on the local host. In specific scenarios, please adjust the address to access the S3 gateway according to the actual situation.
+:::
+
+### Using the AWS CLI
+
+```bash
+$ aws configure
+AWS Access Key ID [None]: admin
+AWS Secret Access Key [None]: 12345678
+Default region name [None]:
+Default output format [None]:
+```
+
+The program will guide you through adding the new configuration interactively, where `Access Key ID` is the same as `MINIO_ROOT_USER` and `Secret Access Key` is the same as `MINIO_ROOT_PASSWORD`, leave the region name and output format blank.
+
+After that, you can access the JuiceFS storage using the `aws s3` command, for example:
+
+```bash
+# List buckets
+$ aws --endpoint-url http://localhost:9000 s3 ls
+
+# List objects in bucket
+$ aws --endpoint-url http://localhost:9000 s3 ls s3://
+```
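+
+You can also copy objects through the gateway. The bucket name is the JuiceFS file system name (assumed to be `jfs` here), and the file names are illustrative:
+
+```bash
+# Upload a local file, then download it again
+$ aws --endpoint-url http://localhost:9000 s3 cp ./hello.txt s3://jfs/hello.txt
+$ aws --endpoint-url http://localhost:9000 s3 cp s3://jfs/hello.txt ./hello-copy.txt
+```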
+
+### Using the MinIO client
+
+First install `mc` by referring to the [MinIO download page](https://min.io/download), then add a new alias:
+
+```bash
+$ mc alias set juicefs http://localhost:9000 admin 12345678 --api S3v4
+```
+
+Following the `mc` command format, the above command creates a configuration with the alias `juicefs`. In particular, note that the API version must be specified in the command with `--api S3v4`.
+
+Then you can freely copy, move, add and delete files and folders between your local disk, JuiceFS storage and other cloud storage via the `mc` client.
+
+```shell
+$ mc ls juicefs/jfs
+[2021-10-20 11:59:00 CST] 130KiB avatar-2191932_1920.png
+[2021-10-20 11:59:00 CST] 4.9KiB box-1297327.svg
+[2021-10-20 11:59:00 CST] 21KiB cloud-4273197.svg
+[2021-10-20 11:59:05 CST] 17KiB hero.svg
+[2021-10-20 11:59:06 CST] 1.7MiB hugo-rocha-qFpnvZ_j9HU-unsplash.jpg
+[2021-10-20 11:59:06 CST] 16KiB man-1352025.svg
+[2021-10-20 11:59:06 CST] 1.3MiB man-1459246.ai
+[2021-10-20 11:59:08 CST] 19KiB sign-up-accent-left.07ab168.svg
+[2021-10-20 11:59:10 CST] 11MiB work-4997565.svg
+```
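+
+For example, copying a file to the gateway and back with `mc` (alias and bucket as above, file names illustrative):
+
+```shell
+$ mc cp ./hello.txt juicefs/jfs/hello.txt
+$ mc cp juicefs/jfs/hello.txt ./hello-copy.txt
+```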
+
+## Deploy JuiceFS S3 Gateway in Kubernetes
+
+### Install via kubectl
+
+Create a secret (take Amazon S3 as an example):
+
+```shell
+export NAMESPACE=default
+```
+
+```shell
+kubectl -n ${NAMESPACE} create secret generic juicefs-secret \
+    --from-literal=name=<NAME> \
+    --from-literal=metaurl=redis://[:<PASSWORD>]@<HOST>:6379[/<DB>] \
+    --from-literal=storage=s3 \
+    --from-literal=bucket=https://<BUCKET>.s3.<REGION>.amazonaws.com \
+    --from-literal=access-key=<ACCESS_KEY> \
+    --from-literal=secret-key=<SECRET_KEY>
+```
+
+- `name`: The JuiceFS file system name.
+- `metaurl`: Connection URL for metadata engine (e.g. Redis). Read [this document](../reference/how_to_setup_metadata_engine.md) for more information.
+- `storage`: Object storage type, such as `s3`, `gs`, `oss`. Read [this document](../reference/how_to_setup_object_storage.md) for the full supported list.
+- `bucket`: Bucket URL. Read [this document](../reference/how_to_setup_object_storage.md) to learn how to setup different object storage.
+- `access-key`: Access key of object storage. Read [this document](../reference/how_to_setup_object_storage.md) for more information.
+- `secret-key`: Secret key of object storage. Read [this document](../reference/how_to_setup_object_storage.md) for more information.
+
+Then download the S3 gateway [deployment YAML](https://github.com/juicedata/juicefs/blob/main/deploy/juicefs-s3-gateway.yaml) and create the `Deployment` and `Service` resources via `kubectl`. The following points require special attention:
+
+- Please replace `${NAMESPACE}` in the following command with the Kubernetes namespace of the actual S3 gateway deployment, which defaults to `kube-system`.
+- The `replicas` for `Deployment` defaults to 1, please adjust as appropriate.
+- The latest version of `juicedata/juicefs-csi-driver` image is used by default, which already integrates the latest version of JuiceFS client, please check [here](https://github.com/juicedata/juicefs-csi-driver/releases) for the specific integrated JuiceFS client version.
+- The `initContainers` of `Deployment` will first try to format the JuiceFS file system, if you have already formatted it in advance, this step will not affect the existing JuiceFS file system.
+- The default port number that the S3 gateway listens on is `9000`.
+- The [startup options](../reference/command_reference.md#juicefs-gateway) of S3 gateway are the default values, please adjust them according to your actual needs.
+- The value of `MINIO_ROOT_USER` environment variable is `access-key` in Secret, and the value of `MINIO_ROOT_PASSWORD` environment variable is `secret-key` in Secret.
+
+```shell
+curl -sSL https://raw.githubusercontent.com/juicedata/juicefs/main/deploy/juicefs-s3-gateway.yaml | sed "s@kube-system@${NAMESPACE}@g" | kubectl apply -f -
+```
+
+Check if it's deployed successfully:
+
+```shell
+# kubectl -n $NAMESPACE get po -o wide -l app.kubernetes.io/name=juicefs-s3-gateway
+juicefs-s3-gateway-5c7d65c77f-gj69l 1/1 Running 0 37m 10.244.2.238 kube-node-3
+# kubectl -n $NAMESPACE get svc -l app.kubernetes.io/name=juicefs-s3-gateway
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+juicefs-s3-gateway   ClusterIP   10.101.108.42   <none>        9000/TCP   142m
+```
+
+You can use `juicefs-s3-gateway.${NAMESPACE}.svc.cluster.local:9000` or pod IP and port number of `juicefs-s3-gateway` (e.g. `10.244.2.238:9000`) in the application pod to access JuiceFS S3 Gateway.
+
+If you want to access the gateway through Ingress, make sure an Ingress Controller has been deployed in the cluster; refer to the [Ingress Controller Deployment Document](https://kubernetes.github.io/ingress-nginx/deploy/). Then create an `Ingress` resource, for example:
+
+```yaml
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: juicefs-s3-gateway
+  namespace: ${NAMESPACE}
+spec:
+  ingressClassName: nginx
+  rules:
+  - http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: juicefs-s3-gateway
+            port:
+              number: 9000
+EOF
+```
+
+You can then access the JuiceFS S3 Gateway through the `<external IP>` of the Ingress controller (no need to include the 9000 port number), which can be obtained as follows:
+
+```shell
+kubectl get services -n ingress-nginx
+```
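+
+Once you know the external address, S3 clients can point at it directly; for example with the AWS CLI (the address is a placeholder):
+
+```shell
+$ aws --endpoint-url http://<EXTERNAL-IP> s3 ls
+```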
+
+There are some differences between the various versions of Ingress. For more usage methods, please refer to [Ingress Controller Usage Document](https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/).
+
+### Install via Helm
+
+1. Prepare a YAML file
+
+ Create a configuration file, for example: `values.yaml`, copy and complete the following configuration information. Among them, the `secret` part is the information related to the JuiceFS file system, you can refer to [JuiceFS Quick Start Guide](../getting-started/for_local.md) for more information.
+
+ ```yaml
+ secret:
+ name: ""
+ metaurl: ""
+ storage: ""
+ accessKey: ""
+ secretKey: ""
+ bucket: ""
+ ```
+
+ If you want to deploy Ingress, add these in `values.yaml`:
+
+ ```yaml
+ ingress:
+   enabled: true
+ ```
+
+2. Deploy
+
+ Execute the following three commands in sequence to deploy the JuiceFS S3 gateway via Helm (note that the following example is deployed to the `kube-system` namespace).
+
+ ```sh
+ helm repo add juicefs-s3-gateway https://juicedata.github.io/charts/
+ helm repo update
+ helm install juicefs-s3-gateway juicefs-s3-gateway/juicefs-s3-gateway -n kube-system -f ./values.yaml
+ ```
+
+3. Check the deployment
+
+ - **Check pods are running**: the deployment launches a `Deployment` named `juicefs-s3-gateway`, so running `kubectl -n kube-system get po -l app.kubernetes.io/name=juicefs-s3-gateway` should show the running pods. For example:
+
+ ```sh
+ $ kubectl -n kube-system get po -l app.kubernetes.io/name=juicefs-s3-gateway
+ NAME READY STATUS RESTARTS AGE
+ juicefs-s3-gateway-5c69d574cc-t92b6 1/1 Running 0 136m
+ ```
+
+ - **Check Service**: run `kubectl -n kube-system get svc -l app.kubernetes.io/name=juicefs-s3-gateway` to check Service:
+
+ ```shell
+ $ kubectl -n kube-system get svc -l app.kubernetes.io/name=juicefs-s3-gateway
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ juicefs-s3-gateway   ClusterIP   10.101.108.42   <none>        9000/TCP   142m
+ ```
+
+## Monitoring
+
+Please see the ["Monitoring"](../administration/monitoring.md) documentation to learn how to collect and display JuiceFS monitoring metrics.
diff --git a/docs/en/development/contributing_guide.md b/docs/en/development/contributing_guide.md
new file mode 100644
index 0000000..929dada
--- /dev/null
+++ b/docs/en/development/contributing_guide.md
@@ -0,0 +1,8 @@
+---
+sidebar_label: Contributing Guide
+sidebar_position: 1
+---
+# Contributing Guide
+
+:::note
+Work in progress.
\ No newline at end of file
diff --git a/docs/en/development/format.md b/docs/en/development/format.md
new file mode 100644
index 0000000..c32de3f
--- /dev/null
+++ b/docs/en/development/format.md
@@ -0,0 +1,8 @@
+---
+sidebar_label: Storage Format
+sidebar_position: 3
+---
+# Storage Format
+
+:::note
+Work in progress.
\ No newline at end of file
diff --git a/docs/en/development/io_processing.md b/docs/en/development/io_processing.md
new file mode 100644
index 0000000..a8b553e
--- /dev/null
+++ b/docs/en/development/io_processing.md
@@ -0,0 +1,9 @@
+---
+sidebar_label: Read/Write/Delete Processing Flow
+sidebar_position: 2
+slug: /internals/io_processing
+---
+# Read and write request processing flow
+
+Work in progress...
+
diff --git a/docs/en/development/metadata-design/_kv.md b/docs/en/development/metadata-design/_kv.md
new file mode 100644
index 0000000..f13b7d7
--- /dev/null
+++ b/docs/en/development/metadata-design/_kv.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Distributed K/V Store
+sidebar_position: 3
+---
+# Metadata Design - Distributed K/V Store
\ No newline at end of file
diff --git a/docs/en/development/metadata-design/_redis.md b/docs/en/development/metadata-design/_redis.md
new file mode 100644
index 0000000..09b9482
--- /dev/null
+++ b/docs/en/development/metadata-design/_redis.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Redis
+sidebar_position: 1
+---
+# Metadata Design - Redis
\ No newline at end of file
diff --git a/docs/en/development/metadata-design/_sql.md b/docs/en/development/metadata-design/_sql.md
new file mode 100644
index 0000000..6b412c3
--- /dev/null
+++ b/docs/en/development/metadata-design/_sql.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: SQL Engine
+sidebar_position: 2
+---
+# Metadata Design - SQL Engine
\ No newline at end of file
diff --git a/docs/en/faq.md b/docs/en/faq.md
new file mode 100644
index 0000000..df77aee
--- /dev/null
+++ b/docs/en/faq.md
@@ -0,0 +1,110 @@
+# FAQ
+
+## Why doesn't JuiceFS support XXX object storage?
+
+JuiceFS already supports many object storage services; please check [the list](reference/how_to_setup_object_storage.md#supported-object-storage) first. If the object storage you need is S3-compatible, you can treat it as S3. Otherwise, please report an issue.
+
+## Can I use Redis cluster?
+
+The short answer is no. JuiceFS uses [transactions](https://redis.io/topics/transactions) to guarantee the atomicity of metadata operations, which are not well supported in cluster mode. Sentinel or another HA solution for Redis is needed.
+
+See ["Redis Best Practices"](administration/metadata/redis_best_practices.md) for more information.
+
+## What's the difference between JuiceFS and XXX?
+
+See ["Comparison with Others"](comparison/juicefs_vs_alluxio.md) for more information.
+
+## How is the performance of JuiceFS?
+
+JuiceFS is a distributed file system: the latency of metadata operations is determined by 1 (read) or 2 (write) round trip(s) between the client and the metadata service (usually 1-3 ms), and the latency to first byte is determined by the performance of the underlying object storage (usually 20-100 ms). The throughput of sequential read/write can range from 50 MB/s to 2800 MiB/s (see the [fio benchmark](benchmark/fio.md) for more information), depending on the network bandwidth and how well the data can be compressed.
+
+JuiceFS is built with multiple layers of caching (invalidated automatically). Once the cache is warmed up, the latency and throughput of JuiceFS can be close to a local file system (with the overhead of FUSE).
+
+## Does JuiceFS support random read/write?
+
+Yes, including random reads/writes issued through mmap. Currently JuiceFS is optimized for sequential reading/writing; optimization for random reading/writing is still in progress. If you want better random read performance, it is recommended to turn off compression ([`--compress none`](reference/command_reference.md#juicefs-format)).
+
+## When will my update be visible to other clients?
+
+All metadata updates are immediately visible to all other clients. JuiceFS guarantees close-to-open consistency; see ["Consistency"](administration/cache_management.md#consistency) for more information.
+
+New data written by `write()` is buffered in the kernel or in the client, and is visible to other processes on the same machine but not to other machines.
+
+Calling `fsync()`, `fdatasync()` or `close()` forces the data to be uploaded to the object storage and the metadata to be updated; alternatively, after a few seconds of automatic refresh, other clients can see the updates. This is also the strategy adopted by the vast majority of distributed file systems.
+
+See ["Write Cache in Client"](administration/cache_management.md#write-cache-in-client) for more information.
+
+## How to copy a large number of small files into JuiceFS quickly?
+
+You could mount JuiceFS with the [`--writeback` option](reference/command_reference.md#juicefs-mount), which writes small files to local disks first and then uploads them to the object storage in the background; this can speed up copying many small files into JuiceFS.
+
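+A minimal sketch, assuming a local Redis metadata engine and the `/mnt/jfs` mount point:
+
+```shell
+$ juicefs mount -d --writeback redis://localhost:6379/1 /mnt/jfs
+```
+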
+See ["Write Cache in Client"](administration/cache_management.md#write-cache-in-client) for more information.
+
+## Can I mount JuiceFS without `root`?
+
+Yes, JuiceFS can be mounted using `juicefs` without root. The default cache directory is `$HOME/.juicefs/cache` (macOS) or `/var/jfsCache` (Linux); you should change it to a directory where the non-root user has write permission.
+
+See ["Read Cache in Client"](administration/cache_management.md#read-cache-in-client) for more information.
+
+## How to unmount JuiceFS?
+
+Use [`juicefs umount`](reference/command_reference.md#juicefs-umount) command to unmount the volume.
+
+## How to upgrade JuiceFS client?
+
+First unmount the JuiceFS volume, then re-mount it with the newer version of the client.
+
+## `docker: Error response from daemon: error while creating mount source path 'XXX': mkdir XXX: file exists.`
+
+When you use [Docker bind mounts](https://docs.docker.com/storage/bind-mounts) to mount a directory on the host machine into a container, you may encounter this error. The reason is that the `juicefs mount` command was executed by a non-root user, so the Docker daemon doesn't have permission to access the directory.
+
+There are two solutions to this problem:
+
+1. Execute `juicefs mount` command with root user
+2. Modify FUSE configuration and add `allow_other` mount option, see [this document](reference/fuse_mount_options.md#allow_other) for more information.
+
+## `/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1` or `/go/pkg/tool/linux_amd64/compile: signal: killed`
+
+This error may be caused by a GCC version that is too old; please try upgrading GCC to 5.4 or above.
+
+## `format: ERR wrong number of arguments for 'auth' command`
+
+This error means you are using Redis < 6.0.0 and specified a username in the Redis URL when executing the `juicefs format` command. Only Redis >= 6.0.0 supports specifying a username, so you need to omit the username parameter in the URL, e.g. `redis://:password@host:6379/1`.
+
+## `fuse: fuse: exec: "/bin/fusermount": stat /bin/fusermount: no such file or directory`
+
+This error means the `juicefs mount` command was executed by a non-root user and the `fusermount` command cannot be found.
+
+There are two solutions to this problem:
+
+1. Execute `juicefs mount` command with root user
+2. Install `fuse` package (e.g. `apt-get install fuse`, `yum install fuse`)
+
+## `fuse: fuse: fork/exec /usr/bin/fusermount: permission denied`
+
+This error means the current user doesn't have permission to execute the `fusermount` command. For example, check the permission of `fusermount` with the following command:
+
+```sh
+$ ls -l /usr/bin/fusermount
+-rwsr-x---. 1 root fuse 27968 Dec 7 2011 /usr/bin/fusermount
+```
+
+The above output means that only the root user and members of the `fuse` group have permission to execute it. Another example:
+
+```sh
+$ ls -l /usr/bin/fusermount
+-rwsr-xr-x 1 root root 32096 Oct 30 2018 /usr/bin/fusermount
+```
+
+The above output means that all users have permission to execute it.
+
+## Why does the same user have permission to access a file in JuiceFS on host X, but not on host Y?
+
+The same user has a different UID or GID on host X and host Y. Use the `id` command to show the UID and GID:
+
+```bash
+$ id alice
+uid=1201(alice) gid=500(staff) groups=500(staff)
+```
+
+Read ["Sync Accounts between Multiple Hosts"](administration/sync_accounts_between_multiple_hosts.md) to resolve this problem.
diff --git a/docs/en/getting-started/_choose_metadata_engine.md b/docs/en/getting-started/_choose_metadata_engine.md
new file mode 100644
index 0000000..0f262cc
--- /dev/null
+++ b/docs/en/getting-started/_choose_metadata_engine.md
@@ -0,0 +1,6 @@
+---
+sidebar_label: How to Choose Metadata Engine
+sidebar_position: 4
+---
+
+# How to Choose Metadata Engine
\ No newline at end of file
diff --git a/docs/en/getting-started/_prerequisites.md b/docs/en/getting-started/_prerequisites.md
new file mode 100644
index 0000000..1fc42ad
--- /dev/null
+++ b/docs/en/getting-started/_prerequisites.md
@@ -0,0 +1,5 @@
+---
+sidebar_label: Prerequisites
+sidebar_position: 1
+---
+# Prerequisites
diff --git a/docs/en/getting-started/_quick_start_guide.md b/docs/en/getting-started/_quick_start_guide.md
new file mode 100644
index 0000000..29e5048
--- /dev/null
+++ b/docs/en/getting-started/_quick_start_guide.md
@@ -0,0 +1,243 @@
+---
+sidebar_label: Quick Start Guide
+sidebar_position: 2
+slug: /quick_start_guide
+---
+
+# JuiceFS Quick Start Guide
+
+To create a JuiceFS file system, you need to prepare the following three things:
+
+1. A Redis database for metadata storage
+2. Object storage for storing data blocks
+3. A JuiceFS client
+
+:::tip
+Not familiar with JuiceFS? You can check [What is JuiceFS?](../introduction/introduction.md) first.
+:::
+
+## 1. Redis Database
+
+You can easily buy a cloud Redis database in various configurations from cloud computing platforms, but if you just want to quickly evaluate JuiceFS, you can use Docker to run a Redis database instance on your local computer:
+
+```shell
+$ sudo docker run -d --name redis \
+ -v redis-data:/data \
+ -p 6379:6379 \
+ --restart unless-stopped \
+ redis redis-server --appendonly yes
+```
+
+After the container is successfully created, you can use `redis://127.0.0.1:6379` to access the Redis database.
+
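+If `redis-cli` is installed on the host, you can optionally confirm that the database is reachable:
+
+```shell
+$ redis-cli -h 127.0.0.1 -p 6379 ping
+PONG
+```
+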
+:::info
+The above command persists Redis data in the `redis-data` data volume of docker, and you can modify the storage location of data persistence as needed.
+:::
+
+:::caution
+The Redis database instance created by the above command does not enable authentication and exposes the host's `6379` port. If you want to access this database via the Internet, it is strongly recommended to refer to [Redis official documentation](https://redis.io/topics/security) to enable protected mode.
+:::
+
+For more information about Redis database, [click here to view](../reference/how_to_setup_metadata_engine.md#redis).
+
+## 2. Object Storage
+
+Like Redis databases, almost all public cloud computing platforms provide object storage services. Because JuiceFS supports object storage services on almost all platforms, you can choose freely according to your personal preferences. You can check our [Object Storage Support List and Setting Guide](../reference/how_to_setup_object_storage.md), which lists all the object storage services currently supported by JuiceFS and how to use them.
+
+Of course, if you just want to quickly evaluate JuiceFS, you can use Docker to quickly run a MinIO object storage instance on your local computer:
+
+```shell
+$ sudo docker run -d --name minio \
+ -p 9000:9000 \
+ -p 9900:9900 \
+ -v $PWD/minio-data:/data \
+ --restart unless-stopped \
+ minio/minio server /data --console-address ":9900"
+```
+
+Then, access the service:
+
+- **MinIO Web Console**: http://127.0.0.1:9900
+- **MinIO API**: http://127.0.0.1:9000
+
+The initial Access Key and Secret Key of the root user are both `minioadmin`.
+
+:::info
+The latest MinIO includes a new web console. The above command sets and maps port `9900` for it through the `--console-address ":9900"` option, and also maps the data path in the MinIO container to the `minio-data` folder in the current directory. You can modify these options as needed.
+:::
+
+## 3. JuiceFS Client
+
+JuiceFS supports Linux, Windows, macOS and other operating systems and various processor architectures. You can download the latest pre-compiled binary program from [here](https://github.com/juicedata/juicefs/releases/latest), please refer to [this document](installation.md#install-the-pre-compiled-client) to select the corresponding version according to the actual system and processor architecture used.
+
+Taking an x86-based Linux system as an example, download the archive whose file name contains `linux-amd64`:
+
+```shell
+$ JFS_LATEST_TAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '"' -f 4 | tr -d 'v')
+$ wget "https://github.com/juicedata/juicefs/releases/download/v${JFS_LATEST_TAG}/juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz"
+```
+
+Unzip and install:
+
+```shell
+$ tar -zxf "juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz"
+$ sudo install juicefs /usr/local/bin
+```
+
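+You can confirm that the client is available by checking its version:
+
+```shell
+$ juicefs version
+```
+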
+:::tip
+You can also compile the JuiceFS client manually from the source code; please refer to [the document](installation.md#manually-compiling).
+:::
+
+## 4. Create JuiceFS file system
+
+When creating a JuiceFS file system, you need to specify both the Redis database used to store metadata and the object storage used to store actual data.
+
+The following command will create a JuiceFS file system named `pics`, use the database `1` in Redis to store metadata, and use the `pics` bucket created in MinIO to store actual data:
+
+```shell
+$ juicefs format \
+ --storage minio \
+ --bucket http://127.0.0.1:9000/pics \
+ --access-key minioadmin \
+ --secret-key minioadmin \
+ redis://127.0.0.1:6379/1 \
+ pics
+```
+
+After executing the command, you will see output similar to the following, indicating that the JuiceFS file system was created successfully.
+
+```shell
+2021/04/29 23:01:18.352256 juicefs[34223] <INFO>: Meta address: redis://127.0.0.1:6379/1
+2021/04/29 23:01:18.354252 juicefs[34223] <INFO>: Ping redis: 132.185µs
+2021/04/29 23:01:18.354758 juicefs[34223] <INFO>: Data use minio://127.0.0.1:9000/pics/pics/
+2021/04/29 23:01:18.361674 juicefs[34223] <INFO>: Volume is formatted as {Name:pics UUID:9c0fab76-efd0-43fd-a81e-ae0916e2fc90 Storage:minio Bucket:http://127.0.0.1:9000/pics AccessKey:minioadmin SecretKey:removed BlockSize:4096 Compression:none Partitions:0 EncryptKey:}
+```
+
+:::info
+You can create as many JuiceFS file systems as you need. But it should be noted that only one file system can be created in each Redis database. For example, when you want to create another file system named `memory`, you have to use another database in Redis, such as No.2, which is `redis://127.0.0.1:6379/2`.
+:::
+
+:::info
+If you don't specify `--storage` option, the JuiceFS client will use the local disk as data storage. When using local storage, JuiceFS can only be used on a local stand-alone machine and cannot be mounted by other clients in the network. [Click here](../reference/how_to_setup_object_storage.md#local-disk) for details.
+:::
+
+## 5. Mount JuiceFS file system
+
+After the JuiceFS file system is created, you can mount it on the operating system and use it. The following command mounts the `pics` file system to the `/mnt/jfs` directory.
+
+```shell
+$ sudo juicefs mount -d redis://127.0.0.1:6379/1 /mnt/jfs
+```
+
+:::tip
+When mounting the JuiceFS file system, there is no need to explicitly specify its name; just fill in the correct Redis server address and database number.
+:::
+
+After executing the command, you will see output similar to the following, indicating that the JuiceFS file system has been successfully mounted on the system.
+
+```shell
+2021/04/29 23:22:25.838419 juicefs[37999] <INFO>: Meta address: redis://127.0.0.1:6379/1
+2021/04/29 23:22:25.839184 juicefs[37999] <INFO>: Ping redis: 67.625µs
+2021/04/29 23:22:25.839399 juicefs[37999] <INFO>: Data use minio://127.0.0.1:9000/pics/pics/
+2021/04/29 23:22:25.839554 juicefs[37999] <INFO>: Cache: /var/jfsCache/9c0fab76-efd0-43fd-a81e-ae0916e2fc90 capacity: 1024 MB
+2021/04/29 23:22:26.340509 juicefs[37999] <INFO>: OK, pics is ready at /mnt/jfs
+```
+
+After the mounting is complete, you can access files in the `/mnt/jfs` directory. You can execute the `df` command to view the JuiceFS file system's mounting status:
+
+```shell
+$ df -Th
+Filesystem Type Size Used Avail Use% Mounted on
+JuiceFS:pics fuse.juicefs 1.0P 64K 1.0P 1% /mnt/jfs
+```
+
+:::info
+By default, the cache of JuiceFS is located in the `/var/jfsCache` directory. In order to obtain the read and write permissions of this directory, the `sudo` command is used here to mount the JuiceFS file system with administrator privileges. When ordinary users read and write `/mnt/jfs`, please assign them the appropriate permissions.
+:::
+
+## 6. Automatically mount JuiceFS on boot
+
+Rename the `juicefs` client to `mount.juicefs` and copy it to the `/sbin/` directory:
+
+```shell
+$ sudo cp /usr/local/bin/juicefs /sbin/mount.juicefs
+```
+
+:::info
+Before executing the above command, we assume that the `juicefs` client program is already in the `/usr/local/bin` directory. You can also unzip a copy of the `juicefs` program directly from the downloaded compression package, rename it according to the above requirements, and copy it to the `/sbin/` directory.
+:::
+
+Edit the `/etc/fstab` configuration file, start a new line, and add a record according to the following format:
+
+```
+<META-URL> <MOUNTPOINT> juicefs _netdev[,<MOUNT-OPTIONS>] 0 0
+```
+
+- Please replace `<META-URL>` with the actual metadata engine address, in the format `redis://<user>:<password>@<host>:<port>/<db>`, for example: `redis://localhost:6379/1`.
+- Please replace `<MOUNTPOINT>` with the actual mount point of the file system, for example: `/jfs`.
+- If necessary, replace `[,<MOUNT-OPTIONS>]` with the actual [mount options](../reference/command_reference.md#juicefs-mount) to be set; separate multiple options with commas.
+
+For example:
+
+```
+redis://localhost:6379/1 /jfs juicefs _netdev,max-uploads=50,writeback,cache-size=2048 0 0
+```
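+
+You can test the new entry without rebooting; a sketch, assuming the `/jfs` mount point from the example above:
+
+```shell
+$ sudo mount -a
+$ df -Th /jfs
+```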
+
+:::caution
+By default, CentOS 6 does not mount network file systems at boot. You need to run the following command to enable automatic mounting of network file systems:
+:::
+
+```bash
+$ sudo chkconfig --add netfs
+```
+
+## 7. Unmount JuiceFS
+
+If you need to unmount a JuiceFS file system, you can first execute the `df` command to view the information of the mounted file systems:
+
+```shell
+$ sudo df -Th
+
+Filesystem     Type          Size   Used  Avail  Use%  Mounted on
+...
+JuiceFS:pics fuse.juicefs 1.0P 1.1G 1.0P 1% /mnt/jfs
+```
+
+You can see that the mount point of the file system `pics` is `/mnt/jfs`; execute the `umount` subcommand to unmount it:
+
+```shell
+$ sudo juicefs umount /mnt/jfs
+```
+
+:::tip
+Execute the `juicefs umount -h` command to obtain detailed help information for the unmount command.
+:::
+
+### Unmount failed
+
+If a file system fails to unmount after executing the command, it will report `Device or resource busy`:
+
+```shell
+2021-05-09 22:42:55.757097 I | fusermount: failed to unmount /mnt/jfs: Device or resource busy
+exit status 1
+```
+
+This can happen because some programs are still reading or writing files in the file system. To ensure data security, you should first check which programs are accessing files in the file system (e.g. with the `lsof` command), try to stop them, and then execute the unmount command again.
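+
+For example, a minimal sketch of such a check with `lsof` might look like this; stop the listed processes before retrying the unmount:
+
+```shell
+# List processes that still have files open on the mounted file system
+$ sudo lsof /mnt/jfs
+```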
+
+:::caution
+The commands in the following content may cause file damage or loss, so please proceed with caution!
+:::
+
+Of course, you can also add the `--force` or `-f` option to the unmount command to force the file system to be unmounted, but you have to bear the risk of possibly catastrophic consequences:
+
+```shell
+$ sudo juicefs umount --force /mnt/jfs
+```
+
+You can also use the `fusermount` command to unmount the file system:
+
+```shell
+$ sudo fusermount -u /mnt/jfs
+```
diff --git a/docs/en/getting-started/for_distributed.md b/docs/en/getting-started/for_distributed.md
new file mode 100644
index 0000000..8897ae7
--- /dev/null
+++ b/docs/en/getting-started/for_distributed.md
@@ -0,0 +1,180 @@
+---
+sidebar_label: Quick Start (Distributed Mode)
+sidebar_position: 3
+---
+
+# JuiceFS Quick Start Guide for Distributed Mode
+
+The previous document ["JuiceFS Quick Start Guide for Standalone Mode"](for_local.md) created a file system that can be mounted on any host by combining an "object storage" and a "SQLite" database. Since the object storage is accessible to any networked computer with the proper credentials, we can access the same JuiceFS file system on different computers simply by copying the SQLite database file to any computer that needs to access the storage.
+
+Obviously, it is feasible to share the file system by copying the SQLite database between computers, but the real-time availability of the files is not guaranteed. Since SQLite is a single file database that cannot be accessed by multiple computers at the same time, we need to use a database that supports network access, such as Redis, PostgreSQL, MySQL, etc., in order to allow a file system to be mounted and read by multiple computers in a distributed environment.
+
+In this document, based on the previous document, we further replace the database from a single-user "SQLite" to a multi-user "cloud database", thus realizing a distributed file system that can be mounted on any computer on the network for reading and writing.
+
+## Network Database
+
+Here, "Network Database" refers to a database that allows multiple users to access it simultaneously over the network. From this perspective, databases can be simply divided into:
+
+1. **Standalone Database**: such databases are single file and usually only accessible on a single machine, such as SQLite, Microsoft Access, etc.
+2. **Network Database**: such databases are usually complex multi-file structures that provide network-based access interfaces and support simultaneous multi-user access, such as Redis, PostgreSQL, etc.
+
+JuiceFS currently supports the following network-based databases.
+
+- **Key-Value Database**: Redis, TiKV
+- **Relational Database**: PostgreSQL, MySQL, MariaDB
+
+Different databases offer different performance and stability. For example, Redis is an in-memory key-value database with excellent performance but relatively weak reliability, while PostgreSQL is a relational database whose performance is lower than an in-memory database but whose reliability is stronger.
+
+We will write a special document about database selection.
+
+## Cloud Database
+
+Cloud computing platforms usually have a wide variety of cloud database offerings, such as Amazon RDS for various relational database versions and Amazon ElastiCache for Redis-compatible in-memory database products. A multi-copy, highly available database cluster can be created with a simple initial setup.
+
+Of course, you can build your own database on the server if you wish.
+
+For simplicity, here we take Amazon ElastiCache for Redis as an example. For a network database, the most basic information consists of the following 2 items:
+
+1. **Database Address**: the access address of the database, the cloud platform may provide different links for internal and external networks.
+2. **Username and Password**: authentication information used to access the database.
+
+## Hands-on Practice
+
+### 1. Install Client
+
+Install the JuiceFS client on all computers that need to mount the file system, refer to [Installation & Upgrade](installation.md) for details.
+
+### 2. Preparing Object Storage
+
+Here is a pseudo-sample with Amazon S3 as an example; you can switch to other object storage, refer to [JuiceFS Supported Storage](../reference/how_to_setup_object_storage.md#supported-object-storage) for details.
+
+- **Bucket Endpoint**: `https://myjfs.s3.us-west-1.amazonaws.com`
+- **Access Key ID**: `ABCDEFGHIJKLMNopqXYZ`
+- **Access Key Secret**: `ZYXwvutsrqpoNMLkJiHgfeDCBA`
+
+### 3. Preparing Database
+
+The following is a pseudo-sample with Amazon ElastiCache for Redis as an example; you can switch to other types of databases, refer to [JuiceFS Supported Databases](../reference/how_to_setup_metadata_engine.md) for details.
+
+- **Database Address**: `myjfs-sh-abc.apse1.cache.amazonaws.com:6379`
+- **Database Username**: `tom`
+- **Database Password**: `mypassword`
+
+The format for using a Redis database in JuiceFS is as follows.
+
+```
+redis://<username>:<password>@<Database-IP-or-URL>:6379/1
+```
+
+:::tip
+Redis versions prior to 6.0 do not support usernames, so omit the `<username>` part of the URL, e.g. `redis://:mypassword@myjfs-sh-abc.apse1.cache.amazonaws.com:6379/1` (please note that the colon in front of the password is a separator and needs to be preserved).
+:::
+
+### 4. Creating a file system
+
+The following command creates a file system that supports cross-network, multi-machine simultaneous mounts, and shared reads and writes using a combination of "Object Storage" and "Redis" database.
+
+```shell
+juicefs format \
+ --storage s3 \
+ --bucket https://myjfs.s3.us-west-1.amazonaws.com \
+ --access-key ABCDEFGHIJKLMNopqXYZ \
+ --secret-key ZYXwvutsrqpoNMLkJiHgfeDCBA \
+ redis://tom:mypassword@myjfs-sh-abc.apse1.cache.amazonaws.com:6379/1 \
+ myjfs
+```
+
+Once the file system is created, the terminal will output something like the following.
+
+```shell
+2021/12/16 16:37:14.264445 juicefs[22290] <INFO>: Meta address: redis://@myjfs-sh-abc.apse1.cache.amazonaws.com:6379/1
+2021/12/16 16:37:14.277632 juicefs[22290] <WARNING>: maxmemory_policy is "volatile-lru", please set it to 'noeviction'.
+2021/12/16 16:37:14.281432 juicefs[22290] <INFO>: Ping redis: 3.609453ms
+2021/12/16 16:37:14.527879 juicefs[22290] <INFO>: Data uses s3://myjfs/myjfs/
+2021/12/16 16:37:14.593450 juicefs[22290] <INFO>: Volume is formatted as {Name:myjfs UUID:4ad0bb86-6ef5-4861-9ce2-a16ac5dea81b Storage:s3 Bucket:https://myjfs AccessKey:ABCDEFGHIJKLMNopqXYZ SecretKey:removed BlockSize:4096 Compression:none Shards:0 Partitions:0 Capacity:0 Inodes:0 EncryptKey:}
+```
+
+:::info
+Once a file system is created, the relevant information including name, object storage, access keys, etc. are recorded in the database. In the current example, the file system information is recorded in the Redis database, so any computer with the database address, username, and password information can mount and read the file system.
+:::
+
+### 5. Mounting the file system
+
+Since the "data" and "metadata" of this file system are stored in cloud services, it can be mounted on any computer with a JuiceFS client installed for shared reads and writes at the same time. For example:
+
+```shell
+juicefs mount redis://tom:mypassword@myjfs-sh-abc.apse1.cache.amazonaws.com:6379/1 mnt
+```
+
+#### Strong data consistency guarantee
+
+JuiceFS provides a "close-to-open" consistency guarantee, which means that when two or more clients read and write the same file at the same time, the changes made by client A may not be immediately visible to client B. However, once the file is closed by client A, any client that re-opens it afterwards is guaranteed to see the latest data, regardless of whether it is on the same node as A or not.
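+
+As a simple illustration (assuming two clients have both mounted the file system at a local `mnt` directory as shown above):
+
+```shell
+# On client A: write a file; closing it flushes the change
+echo "hello from A" > mnt/greeting.txt
+
+# On client B, after A has closed the file: re-opening it shows the latest content
+cat mnt/greeting.txt
+```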
+
+#### Increase cache size to improve performance
+
+Since object storage is a network-based storage service, it inevitably incurs access latency. To solve this problem, JuiceFS provides a caching mechanism that is enabled by default: it allocates a portion of local storage as a buffer layer between the data and the object storage, and asynchronously caches data to local storage when files are read. Please refer to ["Cache"](../administration/cache_management.md) for more details.
+
+By default, JuiceFS will set 100GiB cache in `$HOME/.juicefs/cache` or `/var/jfsCache` directory. Setting a larger cache space on a faster SSD can effectively improve JuiceFS's read and write performance.
+
+You can use `--cache-dir` to adjust the location of the cache directory and `--cache-size` to adjust the size of the cache space, e.g.:
+
+```shell
+juicefs mount \
+ --background \
+ --cache-dir /mycache \
+ --cache-size 512000 \
+ redis://tom:mypassword@myjfs-sh-abc.apse1.cache.amazonaws.com:6379/1 mnt
+```
+
+:::note
+The JuiceFS process needs permission to read and write to the `--cache-dir` directory.
+:::
+
+The above command sets the cache directory to `/mycache` and the cache size to 500GiB.
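+
+If the `/mycache` directory does not exist yet or is not writable by the user running the mount, you might prepare it first; a minimal sketch:
+
+```shell
+sudo mkdir -p /mycache
+sudo chown $(whoami) /mycache
+```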
+
+#### Auto-mount on boot
+
+Take a Linux system as an example and assume that the client is located in the `/usr/local/bin` directory. Rename the JuiceFS client to `mount.juicefs` and copy it to the `/sbin` directory.
+
+```shell
+sudo cp /usr/local/bin/juicefs /sbin/mount.juicefs
+```
+
+Edit the `/etc/fstab` configuration file and add a new record following the rules of fstab.
+
+```
+redis://tom:mypassword@myjfs-sh-abc.apse1.cache.amazonaws.com:6379/1 /mnt/myjfs juicefs _netdev,max-uploads=50,writeback,cache-size=512000 0 0
+```
+
+:::note
+By default, CentOS 6 does not mount network file systems at boot time; you need to run the following command to enable automatic mounting of network file systems: `sudo chkconfig --add netfs`
+:::
+
+### 6. Unmounting the file system
+
+You can unmount the JuiceFS file system (assuming the mount point path is `mnt`) with the `juicefs umount` command.
+
+```shell
+juicefs umount mnt
+```
+
+#### Unmounting failure
+
+If the command fails to unmount the file system, it will report `Device or resource busy`:
+
+```shell
+2021-05-09 22:42:55.757097 I | fusermount: failed to unmount mnt: Device or resource busy
+exit status 1
+```
+
+This may happen because some programs are still reading or writing files in the file system. To ensure data security, you should first find out which programs are accessing files on the file system (e.g. via the `lsof` command), try to stop them, and then re-execute the unmount command.
+
+:::caution
+The following commands may result in file corruption and loss, so be careful!
+:::
+
+Provided that you can ensure data security, you can add the `--force` or `-f` option to the unmount command to force the file system to be unmounted:
+
+```shell
+juicefs umount --force mnt
+```
diff --git a/docs/en/getting-started/for_local.md b/docs/en/getting-started/for_local.md
new file mode 100644
index 0000000..3a54609
--- /dev/null
+++ b/docs/en/getting-started/for_local.md
@@ -0,0 +1,175 @@
+---
+sidebar_label: Quick Start (Standalone Mode)
+sidebar_position: 2
+slug: /quick_start_guide
+---
+
+# JuiceFS Quick Start Guide for Standalone Mode
+
+The JuiceFS file system is driven jointly by ["Object Storage"](../reference/how_to_setup_object_storage.md) and ["Database"](../reference/how_to_setup_metadata_engine.md). In addition to object storage, it also supports using local disk, WebDAV, HDFS, and so on as the underlying storage. Therefore, you can quickly create a standalone file system using a local disk and a SQLite database to understand and experience JuiceFS.
+
+## Install Client
+
+For details, please refer to [Installation & Upgrade](installation.md).
+
+Regardless of the operating system you are using, if executing `juicefs` in the terminal returns the program's help message, it means that you have successfully installed the JuiceFS client.
+
+## Creating a File System
+
+### Basic Concept
+
+To create a file system, use the [`format`](../reference/command_reference.md#juicefs-format) command provided by the client. Its general form is:
+
+```shell
+juicefs format [command options] META-URL NAME
+```
+
+As you can see, there are 3 types of information required to format a file system.
+
+1. **[command options]**: sets the storage medium for the file system; if nothing is specified, it **defaults to the local disk** as the storage medium, with the default path `"$HOME/.juicefs/local"` or `"/var/jfs"`.
+2. **META-URL**: used to set the metadata engine, usually the URL or file path to the database.
+3. **NAME**: the name of the file system.
+
+:::tip
+JuiceFS supports a wide range of storage media and metadata storage engines, see [JuiceFS supported storage media](../reference/how_to_setup_object_storage.md) and [JuiceFS supported metadata storage engines](../reference/how_to_setup_metadata_engine.md).
+:::
+
+### Hands-on Practice
+
+On a Linux system, for example, the following command creates a file system named `myjfs`.
+
+```shell
+juicefs format sqlite3://myjfs.db myjfs
+```
+
+When the creation completes, output similar to the following is returned.
+
+```shell {1,4}
+2021/12/14 18:26:37.666618 juicefs[40362] <INFO>: Meta address: sqlite3://myjfs.db
+[xorm] [info] 2021/12/14 18:26:37.667504 PING DATABASE sqlite3
+2021/12/14 18:26:37.674147 juicefs[40362] <WARNING>: The latency to database is too high: 7.257333ms
+2021/12/14 18:26:37.675713 juicefs[40362] <INFO>: Data use file:///Users/herald/.juicefs/local/myjfs/
+2021/12/14 18:26:37.689683 juicefs[40362] <INFO>: Volume is formatted as {Name:myjfs UUID:d5bdf7ea-472c-4640-98a6-6f56aea13982 Storage:file Bucket:/Users/herald/.juicefs/local/ AccessKey: SecretKey: BlockSize:4096 Compression:none Shards:0 Partitions:0 Capacity:0 Inodes:0 EncryptKey:}
+```
+
+As you can see from the output, the file system uses SQLite as the metadata storage engine, and the database file is located in the current directory with the file name `myjfs.db`, which stores all the information of the `myjfs` file system. A complete table structure has been created in it and will be used to store all the metadata of the file system.
+
+![](../images/sqlite-info.png)
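+
+If you prefer the command line over a graphical viewer, you could also inspect these tables with the `sqlite3` CLI (assuming it is installed); this step is purely optional:
+
+```shell
+sqlite3 myjfs.db ".tables"
+```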
+
+Since no storage-related options are specified, the client uses the local disk as the storage medium by default. According to the output, the file system storage path is `file:///Users/herald/.juicefs/local/myjfs/`.
+
+## Mounting the File System
+
+### Basic Concept
+
+To mount a file system, use the [`mount`](../reference/command_reference.md#juicefs-mount) command provided by the client. Its general form is:
+
+```shell
+juicefs mount [command options] META-URL MOUNTPOINT
+```
+
+Similar to the command to create a file system, the following information is required to mount a file system.
+
+1. **[command options]**: used to specify file system-related options, e.g. `-d` enables background mounts.
+2. **META-URL**: used to set up metadata storage, usually the URL or file path to the database.
+3. **MOUNTPOINT**: the mount point, i.e. the path where the file system will be mounted.
+
+:::tip
+The mount point (MOUNTPOINT) on Windows systems should be an unoccupied drive letter, e.g. `Z:`, `Y:`.
+:::
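+
+For example, the same mount command on Windows might look like the following sketch, where `Z:` is assumed to be an unoccupied drive letter:
+
+```shell
+juicefs mount sqlite3://myjfs.db Z:
+```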
+
+### Hands-on Practice
+
+:::note
+As SQLite is a single-file database, you should pay attention to the path of the database file when mounting; JuiceFS supports both relative and absolute paths.
+:::
+
+The following command mounts the `myjfs` file system to the `mnt` folder in the current directory.
+
+```shell
+juicefs mount sqlite3://myjfs.db mnt
+```
+
+![](../images/sqlite-mount-local.png)
+
+By default, the client mounts the file system in the foreground. As you can see in the image above, the program keeps running in the current terminal process, and the file system will be unmounted when you press Ctrl + C or close the terminal window.
+
+In order to keep the file system mounted in the background, you can specify the `-d` or `--background` option when mounting, i.e. let the client mount the file system as a daemon.
+
+```shell
+juicefs mount sqlite3://myjfs.db mnt -d
+```
+
+Next, any files stored on mount point `mnt` will be split into specific blocks according to [How JuiceFS Stores Files](../introduction/architecture.md#how-juicefs-stores-files) and stored in `$HOME/.juicefs/local/myjfs` directory, and the corresponding metadata will be stored in the `myjfs.db` database.
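+
+To see this in action, you could write a file through the mount point and then look inside the local "object storage" directory; this is just a sketch, and the exact sub-directory layout you see may differ:
+
+```shell
+echo "hello juicefs" > mnt/hello.txt
+ls "$HOME/.juicefs/local/myjfs/"
+```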
+
+Finally, the mount point `mnt` can be unmounted by executing the following command.
+
+```shell
+juicefs umount mnt
+```
+
+## Go Further
+
+The previous content is only meant to help you quickly experience and understand how JuiceFS works. We can take it a step further by still using SQLite to store metadata but replacing the local storage with "object storage" for a more useful setup.
+
+### Object Storage
+
+Object storage is a web storage service based on the HTTP protocol that offers simple APIs for access. It has a flat structure, is easy to scale, is relatively inexpensive, and is ideal for storing large amounts of unstructured data. Almost all major cloud computing platforms provide object storage services, such as Amazon S3, Alibaba Cloud OSS, Backblaze B2, etc.
+
+JuiceFS supports almost all object storage services, see [JuiceFS supported storage media](../reference/how_to_setup_object_storage.md).
+
+In general, creating an object storage requires only 2 steps:
+
+1. Create a `Bucket` and get the Endpoint address.
+2. Create the `Access Key ID` and `Access Key Secret`, the access keys for the Object Storage API.
+
+Using AWS S3 as an example, a created resource would look something like the following.
+
+- **Bucket Endpoint**: `https://myjfs.s3.us-west-1.amazonaws.com`
+- **Access Key ID**: `ABCDEFGHIJKLMNopqXYZ`
+- **Access Key Secret**: `ZYXwvutsrqpoNMLkJiHgfeDCBA`
+
+:::note
+The process of creating an object storage may vary slightly from platform to platform, so it is recommended to check the help manual of the cloud platform. In addition, some platforms may provide different Endpoint addresses for internal and external networks; please choose the address for external network access, since this guide accesses the object storage from a local machine.
+:::
+
+### Hands-on Practice
+
+Next, create a JuiceFS file system using SQLite and Amazon S3 object storage.
+
+:::note
+If the `myjfs.db` file already exists, delete it first and then execute the following command.
+:::
+
+```shell
+juicefs format --storage s3 \
+ --bucket https://myjfs.s3.us-west-1.amazonaws.com \
+ --access-key ABCDEFGHIJKLMNopqXYZ \
+ --secret-key ZYXwvutsrqpoNMLkJiHgfeDCBA \
+ sqlite3://myjfs.db myjfs
+```
+
+In the above command, the database and file system names remain the same and the following information related to object storage is added.
+
+- `--storage`: Used to set the storage type, e.g. oss, s3, etc.
+- `--bucket`: Used to set the Endpoint address of the object store.
+- `--access-key`: Used to set the Object Storage Access Key ID.
+- `--secret-key`: Used to set the Object Storage Access Key Secret.
+
+:::note
+Please replace the information in the above command with your own object storage information.
+:::
+
+Once created, you can mount it.
+
+```shell
+juicefs mount sqlite3://myjfs.db mnt
+```
+
+As you can see, the mount command is exactly the same as when using local storage, because JuiceFS has already written the information about the object storage to the `myjfs.db` database, so there is no need to provide it again when mounting.
+
+The combination of SQLite and object storage is far more practical than using a local disk. From an application perspective, this approach is equivalent to plugging an object storage with almost unlimited capacity into your local computer, allowing you to use cloud storage as if it were a local disk.
+
+Further, all the data of the file system is stored in the cloud-based object storage, so the `myjfs.db` database can be copied to other computers where JuiceFS clients are installed for mounting, reading and writing. That is, any computer that can read the database with the metadata stored on it can mount and read/write to the file system.
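+
+For example, you might copy the database to another machine (the host name `otherhost` and user `alice` below are hypothetical) and mount it there with the same command:
+
+```shell
+scp myjfs.db alice@otherhost:~/
+ssh alice@otherhost "mkdir -p ~/mnt && juicefs mount -d sqlite3://myjfs.db ~/mnt"
+```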
+
+Obviously, it is difficult for a single file database like SQLite to be accessed by multiple computers at the same time. If SQLite is replaced by Redis, PostgreSQL, MySQL, etc., which can be accessed by multiple computers at the same time through the network, then it is possible to achieve distributed read and write on the JuiceFS file system.
diff --git a/docs/en/getting-started/installation.md b/docs/en/getting-started/installation.md
new file mode 100644
index 0000000..9b86276
--- /dev/null
+++ b/docs/en/getting-started/installation.md
@@ -0,0 +1,281 @@
+---
+sidebar_label: Installation & Upgrade
+sidebar_position: 1
+slug: /installation
+---
+
+# Installation & Upgrade
+
+JuiceFS has good cross-platform capability and supports running on all kinds of operating systems of almost all major architectures, including but not limited to Linux, macOS, Windows, BSD, etc.
+
+The JuiceFS client has only one binary file; you can download a pre-compiled version, unzip it, and use it directly, or compile it manually from the source code.
+
+## Install The Pre-compiled Client
+
+You can find the latest version of the client for download at [GitHub](https://github.com/juicedata/juicefs/releases). Pre-compiled versions for different CPU architectures and operating systems are available in the download list of each release, so please choose the one that matches your platform, e.g.
+
+| File Name | Description |
+| ------------------------------------ | ---------------------------- |
+| `juicefs-x.x.x-darwin-amd64.tar.gz` | For macOS systems with Intel chips |
+| `juicefs-x.x.x-linux-amd64.tar.gz` | For Linux distributions on the x86 architecture |
+| `juicefs-x.x.x-linux-arm64.tar.gz` | For Linux distributions on the ARM architecture |
+| `juicefs-x.x.x-windows-amd64.tar.gz` | For Windows on the x86 architecture |
+| `juicefs-hadoop-x.x.x-linux-amd64.jar` | Hadoop Java SDK for Linux distributions on the x86 architecture |
+
+:::tip
+For macOS on M1 series chips, you can use the `darwin-amd64` version of the client, which depends on [Rosetta 2](https://support.apple.com/zh-cn/HT211861), or you can refer to [Manually Compiling](#manually-compiling) to compile a native version.
+:::
+
+### Linux
+
+For Linux systems with x86 architecture, download the file with the file name `linux-amd64` and execute the following command in the terminal.
+
+1. Get the latest version number
+
+ ```shell
+ JFS_LATEST_TAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '"' -f 4 | tr -d 'v')
+ ```
+
+2. Download the client to the current directory
+
+ ```shell
+ wget "https://github.com/juicedata/juicefs/releases/download/v${JFS_LATEST_TAG}/juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz"
+ ```
+
+3. Unzip the installation package
+
+ ```shell
+ tar -zxf "juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz"
+ ```
+
+4. Install the client
+
+ ```shell
+ sudo install juicefs /usr/local/bin
+ ```
+
+After completing the above 4 steps, execute the `juicefs` command in the terminal; if the help message is returned, the client has been installed successfully.
+
+:::info
+If the terminal prompts `command not found`, it may be that `/usr/local/bin` is not in your system's `PATH` environment variable. You can run `echo $PATH` to see which executable paths are set, select an appropriate path based on the result, adjust the target path accordingly, and re-execute the installation command in step 4.
+:::
+
+### Windows
+
+There are two ways to use JuiceFS on Windows systems.
+
+1. [Using Pre-compiled Windows client](#pre-compiled-windows-client)
+2. [Using the Linux client in WSL](#using-the-linux-client-in-wsl)
+
+#### Pre-compiled Windows Client
+
+The Windows client of JuiceFS is also a standalone binary that can be downloaded and unpacked to run directly.
+
+1. Installing Dependencies
+
+ Since Windows does not natively support the FUSE interface, you first need to download and install [WinFsp](http://www.secfs.net/winfsp/) in order to implement FUSE support.
+
+ :::tip
+ **[WinFsp](https://github.com/billziss-gh/winfsp)** is an open source Windows file system agent that provides a FUSE emulation layer that allows JuiceFS clients to mount file systems for use on Windows systems.
+ :::
+
+2. Install the client
+
+ Take Windows 10 system as an example, download the file with the filename `windows-amd64`, unzip it and get `juicefs.exe` which is the JuiceFS client binary.
+
+ To make it easier to use, you can create a folder named `juicefs` in the root directory of the `C:\` disk, and extract `juicefs.exe` to that folder. Then add `C:\juicefs` to the environment variables of your system, restart the system to let the settings take effect, and then you can run `juicefs` commands directly using the `Command Prompt` or `PowerShell` terminal that come with your system.
+
+ ![Windows ENV path](../images/windows-path-en.png)
+
+#### Using the Linux client in WSL
+
+[WSL](https://docs.microsoft.com/en-us/windows/wsl/about) stands for Windows Subsystem for Linux, which is supported on Windows 10 version 2004 and later as well as Windows 11. It allows you to run most of the command-line tools, utilities, and applications of GNU/Linux natively on a Windows system without the overhead of a traditional virtual machine or dual-boot setup.
+
+For details, see ["Using JuiceFS on WSL"](../tutorials/juicefs_on_wsl.md).
+
+### macOS
+
+Since macOS does not support the FUSE interface by default, you need to install [macFUSE](https://osxfuse.github.io/) first to implement support for FUSE.
+
+:::tip
+[macFUSE](https://github.com/osxfuse/osxfuse) is an open source file system enhancement tool that allows macOS to mount third-party file systems, enabling JuiceFS clients to mount file systems for use on macOS systems.
+:::
+
+#### Homebrew
+
+If you have the [Homebrew](https://brew.sh/) package manager installed on your system, you can install the JuiceFS client by executing the following command.
+
+```shell
+brew tap juicedata/homebrew-tap
+brew install juicefs
+```
+
+#### Pre-compiled Binary
+
+You can also download the binary whose filename contains `darwin-amd64`, unzip it, and install the program to any executable path on your system using the `install` command, e.g.
+
+```shell
+sudo install juicefs /usr/local/bin
+```
+
+### Docker
+
+If you want to use JuiceFS in a Docker container, here is a `Dockerfile` for building a JuiceFS client image. It can be used to build a standalone client image, or as a base image to package the client together with other applications.
+
+```dockerfile
+FROM ubuntu:20.04
+
+RUN apt update && apt install -y curl fuse && \
+ apt-get autoremove && \
+ apt-get clean && \
+ rm -rf \
+ /tmp/* \
+ /var/lib/apt/lists/* \
+ /var/tmp/*
+
+RUN set -x && \
+ mkdir /juicefs && \
+ cd /juicefs && \
+ JFS_LATEST_TAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '"' -f 4 | tr -d 'v') && \
+ curl -s -L "https://github.com/juicedata/juicefs/releases/download/v${JFS_LATEST_TAG}/juicefs-${JFS_LATEST_TAG}-linux-amd64.tar.gz" \
+ | tar -zx && \
+ install juicefs /usr/bin && \
+ cd .. && \
+ rm -rf /juicefs
+
+CMD [ "juicefs" ]
+```
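+
+As a quick sanity check, you could build this image and run the client from it; the image tag below is just an example:
+
+```shell
+# Build the image from the Dockerfile above
+docker build -t juicefs-client .
+
+# Running the image without arguments executes the default CMD and prints the client help
+docker run --rm juicefs-client
+```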
+
+## Manually Compiling
+
+If no pre-compiled client is available for your platform, such as FreeBSD or macOS on the M1 chip, you can compile the JuiceFS client manually.
+
+In addition, manually compiling the client gives you early access to new features under development, but it requires some basic knowledge of software compilation.
+
+### Unix-like Client
+
+Compiling clients for Linux, macOS, BSD and other Unix-like systems requires the following dependencies:
+
+- [Go](https://golang.org) 1.16+
+- GCC 5.4+
+
+1. Clone the source code
+
+ ```shell
+ git clone https://github.com/juicedata/juicefs.git
+ ```
+
+2. Enter the source code directory
+
+ ```shell
+ cd juicefs
+ ```
+
+3. Switch the branch
+
+ The source code uses the `main` branch by default, and you can switch to any official release, for example to `v0.17.4`.
+
+ ```shell
+ git checkout v0.17.4
+ ```
+
+ :::caution
+ The development branch often involves large changes, so please do not use the clients compiled in the "development branch" for the production environment.
+ :::
+
+4. Compile
+
+ ```shell
+ make
+ ```
+
+ The compiled `juicefs` binary is located in the current directory.
+
+### Compiling on Windows
+
+To compile the JuiceFS client on Windows, you need to install [Go](https://golang.org) 1.16+ and GCC 5.4+.
+
+Since GCC does not provide a native Windows build, you need to use a version provided by a third party, either [MinGW-w64](https://sourceforge.net/projects/mingw-w64/) or [Cygwin](https://www.cygwin.com/). Here we take MinGW-w64 as an example.
+
+Download MinGW-w64 and add its `bin` directory to the system environment variables.
+
+1. Clone and enter the project directory:
+
+ ```shell
+ git clone https://github.com/juicedata/juicefs.git && cd juicefs
+ ```
+
+2. Copy the WinFsp headers
+
+ ```shell
+ mkdir "C:\WinFsp\inc\fuse"
+ ```
+
+ ```shell
+ copy .\hack\winfsp_headers\* C:\WinFsp\inc\fuse\
+ ```
+
+ ```shell
+ dir "C:\WinFsp\inc\fuse"
+ ```
+
+ ```shell
+ set CGO_CFLAGS=-IC:/WinFsp/inc/fuse
+ ```
+
+ ```shell
+ go env -w CGO_CFLAGS=-IC:/WinFsp/inc/fuse
+ ```
+
+3. Compile client
+
+ ```shell
+ go build -ldflags="-s -w" -o juicefs.exe ./cmd
+ ```
+
+### Cross-compiling Windows clients in Linux
+
+Compiling a specific version of the client for Windows is essentially the same as for the [Unix-like Client](#unix-like-client) and can be done directly on a Linux system. However, in addition to `go` and `gcc`, which must be installed, you also need to install:
+
+- [mingw-w64](https://www.mingw-w64.org/downloads/)
+
+Just install the latest version provided by your Linux distribution's package manager; for example, on Ubuntu 20.04+ it can be installed directly as follows.
+
+```shell
+sudo apt install mingw-w64
+```
+
+Compile the Windows client:
+
+```shell
+make juicefs.exe
+```
+
+The compiled client is a binary file named `juicefs.exe`, located in the current directory.
+
+## Upgrade
+
+The JuiceFS client has only one binary file, so to upgrade to a new version you only need to replace the old binary with the new one.
+
+- **Use pre-compiled client**: You can refer to the installation method of the corresponding system in this document, download the latest client, and overwrite the old one.
+- **Manually compile the client**: You can pull the latest source code and recompile it to overwrite the old version of the client.
+
+:::caution
+For a file system that has been mounted with an old version of the JuiceFS client, you need to [unmount the file system](for_distributed.md#6-unmounting-the-file-system) first, and then re-mount it with the new version of the client.
+:::
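+
+For example, for a client mounted as in the quick start examples earlier in these docs (the Redis address and mount point below are just those example values), the upgrade sequence might look like the following sketch:
+
+```shell
+sudo juicefs umount /mnt/jfs
+# Replace the binary with the newly downloaded or compiled one
+sudo install juicefs /usr/local/bin
+sudo juicefs mount -d redis://127.0.0.1:6379/1 /mnt/jfs
+```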
+
+## Uninstall
+
+The JuiceFS client has only one binary file, which can be deleted simply by finding the program's location. For example, for the client installed on a Linux system as described in this document, execute the following command to uninstall the client.
+
+```shell
+sudo rm /usr/local/bin/juicefs
+```
+
+You can also see where the program is located by using the `which` command.
+
+```shell
+which juicefs
+```
+
+The path returned by the command is the location where the JuiceFS client is installed on your system. The uninstallation method for other operating systems follows the same pattern.
diff --git a/docs/en/grafana_template.json b/docs/en/grafana_template.json
new file mode 100644
index 0000000..ed134bc
--- /dev/null
+++ b/docs/en/grafana_template.json
@@ -0,0 +1,2215 @@
+{
+ "annotations": {
+ "list": [
+ {
+ "builtIn": 1,
+ "datasource": "-- Grafana --",
+ "enable": true,
+ "hide": true,
+ "iconColor": "rgba(0, 211, 255, 1)",
+ "name": "Annotations & Alerts",
+ "type": "dashboard"
+ }
+ ]
+ },
+ "editable": true,
+ "gnetId": null,
+ "graphTooltip": 0,
+ "id": 16,
+ "iteration": 1640078675219,
+ "links": [],
+ "panels": [
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 0
+ },
+ "hiddenSeries": false,
+ "id": 2,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "avg(juicefs_used_space{vol_name=\"$name\"})",
+ "format": "time_series",
+ "instant": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Data Size",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Data Size",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 0
+ },
+ "hiddenSeries": false,
+ "id": 4,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "avg(juicefs_used_inodes{vol_name=\"$name\"})",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Files",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Files",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 0
+ },
+ "hiddenSeries": false,
+ "id": 5,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "count(juicefs_uptime{vol_name=\"$name\"})",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Sessions",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Client Sessions",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 6
+ },
+ "hiddenSeries": false,
+ "id": 8,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_fuse_ops_durations_histogram_seconds_count{vol_name=\"$name\"}[1m]) < 5000000000) by (instance)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Ops {{instance}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Operations",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "binBps"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 6
+ },
+ "hiddenSeries": false,
+ "id": 7,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_fuse_written_size_bytes_sum{vol_name=\"$name\"}[1m]) < 5000000000) by (instance)",
+ "format": "time_series",
+ "instant": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Write {{instance}}",
+ "refId": "A"
+ },
+ {
+ "expr": "sum(rate(juicefs_fuse_read_size_bytes_sum{vol_name=\"$name\"}[1m]) < 5000000000) by (instance)",
+ "format": "time_series",
+ "hide": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Read {{instance}}",
+ "refId": "B"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "IO Throughput",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "binBps",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "decimals": null,
+ "format": "Bps",
+ "label": "",
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "µs"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 6
+ },
+ "hiddenSeries": false,
+ "id": 18,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_fuse_ops_durations_histogram_seconds_sum{vol_name=\"$name\"}[1m])) by (instance,mp) * 1000000 / sum(rate(juicefs_fuse_ops_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (instance,mp)",
+ "format": "time_series",
+ "hide": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "IO Latency",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "µs",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 12
+ },
+ "hiddenSeries": false,
+ "id": 13,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_transaction_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (instance)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Transcations",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "µs"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 12
+ },
+ "hiddenSeries": false,
+ "id": 14,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_transaction_durations_histogram_seconds_sum{vol_name=\"$name\"}[1m])) by (instance,mp) * 1000000 / sum(rate(juicefs_transaction_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Transcation Latency",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "µs",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 12
+ },
+ "hiddenSeries": false,
+ "id": 20,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_transaction_restart{vol_name=~\"$name\"}[1m])) by (instance)",
+ "format": "time_series",
+ "intervalFactor": 1,
+ "legendFormat": "Restarts {{instance}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Transaction Restarts",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 18
+ },
+ "hiddenSeries": false,
+ "id": 15,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_object_request_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (method)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{method}}",
+ "refId": "A"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_errors{vol_name=\"$name\"}[1m])) ",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "errors",
+ "refId": "B"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Objects Requests",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 18
+ },
+ "hiddenSeries": false,
+ "id": 17,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_data_bytes{method=\"PUT\",vol_name=\"$name\"}[1m])) by (instance,method)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{method}} {{instance}}",
+ "refId": "A"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_data_bytes{method=\"GET\",vol_name=\"$name\"}[1m])) by (instance,method)",
+ "format": "time_series",
+ "hide": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{method}} {{instance}}",
+ "refId": "B"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_data_bytes{method=\"GET\",vol_name=\"$name\"}[1m]))",
+ "hide": false,
+ "interval": "",
+ "legendFormat": "Total",
+ "refId": "C"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Objects Throughput",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:145",
+ "format": "Bps",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:146",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "µs"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 18
+ },
+ "hiddenSeries": false,
+ "id": 16,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_object_request_durations_histogram_seconds_sum{vol_name=\"$name\"}[1m])) by (instance) * 1000000 / sum(rate(juicefs_object_request_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (instance)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Objects Latency",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "µs",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 24
+ },
+ "hiddenSeries": false,
+ "id": 10,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_cpu_usage{vol_name=\"$name\"}[1m])*100 < 1000) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Client CPU Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:137",
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:138",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 24
+ },
+ "hiddenSeries": false,
+ "id": 11,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(juicefs_memory{vol_name=\"$name\"}) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Client Memory Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 24
+ },
+ "hiddenSeries": false,
+ "id": 21,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(go_goroutines) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Go threads",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 30
+ },
+ "hiddenSeries": false,
+ "id": 22,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(juicefs_blockcache_bytes{vol_name=\"$name\"}) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Block Cache Size",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 30
+ },
+ "hiddenSeries": false,
+ "id": 23,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(juicefs_blockcache_blocks{vol_name=\"$name\"}) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Block Cache Count",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 30
+ },
+ "hiddenSeries": false,
+ "id": 24,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_blockcache_hits{vol_name=\"$name\"}[1m])) by (instance,mp) *100 / (sum(rate(juicefs_blockcache_hits{vol_name=\"$name\"}[1m])) by (instance,mp) + sum(rate(juicefs_blockcache_miss{vol_name=\"$name\"}[1m])) by (instance,mp))",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Hits {{instance}}:{{mp}}",
+ "refId": "A"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_blockcache_hit_bytes{vol_name=\"$name\"}[1m])) by (instance,mp) *100 / (sum(rate(juicefs_blockcache_hit_bytes{vol_name=\"$name\"}[1m])) by (instance,mp) + sum(rate(juicefs_blockcache_miss_bytes{vol_name=\"$name\"}[1m])) by (instance,mp))",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "HitBytes {{instance}}:{{mp}}",
+ "refId": "B"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Block Cache Hit Rate",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 36
+ },
+ "hiddenSeries": false,
+ "id": 25,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_compact_size_histogram_bytes_count{vol_name=\"$name\"}[1m])) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Compaction",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:1080",
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:1081",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 36
+ },
+ "hiddenSeries": false,
+ "id": 26,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_compact_size_histogram_bytes_sum{vol_name=\"$name\"}[1m])) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Compacted Data",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 36
+ },
+ "hiddenSeries": false,
+ "id": 27,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(juicefs_fuse_open_handlers{vol_name=\"$name\"}) by (instance,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{instance}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Open File Handlers",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:921",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:922",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ }
+ ],
+ "refresh": "",
+ "schemaVersion": 30,
+ "style": "dark",
+ "tags": [],
+ "templating": {
+ "list": [
+ {
+ "current": {
+ "selected": true,
+ "text": "juicefs",
+ "value": "juicefs"
+ },
+ "hide": 0,
+ "label": null,
+ "multi": false,
+ "name": "datasource",
+ "options": [],
+ "query": "prometheus",
+ "refresh": 1,
+ "regex": "",
+ "skipUrlSync": false,
+ "type": "datasource"
+ },
+ {
+ "allValue": null,
+ "current": {},
+ "datasource": "${datasource}",
+ "definition": "label_values(juicefs_uptime, vol_name)",
+ "description": null,
+ "error": null,
+ "hide": 0,
+ "includeAll": false,
+ "label": null,
+ "multi": false,
+ "name": "name",
+ "options": [],
+ "query": {
+ "query": "label_values(juicefs_uptime, vol_name)",
+ "refId": "StandardVariableQuery"
+ },
+ "refresh": 1,
+ "regex": "",
+ "skipUrlSync": false,
+ "sort": 0,
+ "type": "query"
+ }
+ ]
+ },
+ "time": {
+ "from": "now-1h",
+ "to": "now"
+ },
+ "timepicker": {
+ "refresh_intervals": [
+ "30s",
+ "1m",
+ "5m",
+ "15m",
+ "30m",
+ "1h",
+ "2h",
+ "1d"
+ ],
+ "time_options": [
+ "5m",
+ "15m",
+ "1h",
+ "6h",
+ "12h",
+ "24h",
+ "2d",
+ "7d",
+ "30d"
+ ]
+ },
+ "timezone": "",
+ "title": "JuiceFS Dashboard",
+ "uid": "-hm07csGk",
+ "version": 3
+}
diff --git a/docs/en/grafana_template_k8s.json b/docs/en/grafana_template_k8s.json
new file mode 100644
index 0000000..fea21d0
--- /dev/null
+++ b/docs/en/grafana_template_k8s.json
@@ -0,0 +1,2215 @@
+{
+ "annotations": {
+ "list": [
+ {
+ "builtIn": 1,
+ "datasource": "-- Grafana --",
+ "enable": true,
+ "hide": true,
+ "iconColor": "rgba(0, 211, 255, 1)",
+ "name": "Annotations & Alerts",
+ "type": "dashboard"
+ }
+ ]
+ },
+ "editable": true,
+ "gnetId": null,
+ "graphTooltip": 0,
+ "id": 16,
+ "iteration": 1640078675219,
+ "links": [],
+ "panels": [
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 0
+ },
+ "hiddenSeries": false,
+ "id": 2,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "avg(juicefs_used_space{vol_name=\"$name\"})",
+ "format": "time_series",
+ "instant": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Data Size",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Data Size",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 0
+ },
+ "hiddenSeries": false,
+ "id": 4,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "avg(juicefs_used_inodes{vol_name=\"$name\"})",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Files",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Files",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 0
+ },
+ "hiddenSeries": false,
+ "id": 5,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "count(juicefs_uptime{vol_name=\"$name\"})",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Sessions",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Client Sessions",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 6
+ },
+ "hiddenSeries": false,
+ "id": 8,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_fuse_ops_durations_histogram_seconds_count{vol_name=\"$name\"}[1m]) < 5000000000) by (node)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Ops {{node}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Operations",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "binBps"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 6
+ },
+ "hiddenSeries": false,
+ "id": 7,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_fuse_written_size_bytes_sum{vol_name=\"$name\"}[1m]) < 5000000000) by (node)",
+ "format": "time_series",
+ "instant": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Write {{node}}",
+ "refId": "A"
+ },
+ {
+ "expr": "sum(rate(juicefs_fuse_read_size_bytes_sum{vol_name=\"$name\"}[1m]) < 5000000000) by (node)",
+ "format": "time_series",
+ "hide": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Read {{node}}",
+ "refId": "B"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "IO Throughput",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "binBps",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "decimals": null,
+ "format": "Bps",
+ "label": "",
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "µs"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 6
+ },
+ "hiddenSeries": false,
+ "id": 18,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_fuse_ops_durations_histogram_seconds_sum{vol_name=\"$name\"}[1m])) by (node,mp) * 1000000 / sum(rate(juicefs_fuse_ops_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (node,mp)",
+ "format": "time_series",
+ "hide": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "IO Latency",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "µs",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 12
+ },
+ "hiddenSeries": false,
+ "id": 13,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_transaction_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (node)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Transcations",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "µs"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 12
+ },
+ "hiddenSeries": false,
+ "id": 14,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_transaction_durations_histogram_seconds_sum{vol_name=\"$name\"}[1m])) by (node,mp) * 1000000 / sum(rate(juicefs_transaction_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Transcation Latency",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "µs",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 12
+ },
+ "hiddenSeries": false,
+ "id": 20,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_transaction_restart{vol_name=~\"$name\"}[1m])) by (node)",
+ "format": "time_series",
+ "intervalFactor": 1,
+ "legendFormat": "Restarts {{node}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Transaction Restarts",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 18
+ },
+ "hiddenSeries": false,
+ "id": 15,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_object_request_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (method)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{method}}",
+ "refId": "A"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_errors{vol_name=\"$name\"}[1m])) ",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "errors",
+ "refId": "B"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Objects Requests",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 18
+ },
+ "hiddenSeries": false,
+ "id": 17,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_data_bytes{method=\"PUT\",vol_name=\"$name\"}[1m])) by (node,method)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{method}} {{node}}",
+ "refId": "A"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_data_bytes{method=\"GET\",vol_name=\"$name\"}[1m])) by (node,method)",
+ "format": "time_series",
+ "hide": false,
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{method}} {{node}}",
+ "refId": "B"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_object_request_data_bytes{method=\"GET\",vol_name=\"$name\"}[1m]))",
+ "hide": false,
+ "interval": "",
+ "legendFormat": "Total",
+ "refId": "C"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Objects Throughput",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:145",
+ "format": "Bps",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:146",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "µs"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 18
+ },
+ "hiddenSeries": false,
+ "id": 16,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(rate(juicefs_object_request_durations_histogram_seconds_sum{vol_name=\"$name\"}[1m])) by (node) * 1000000 / sum(rate(juicefs_object_request_durations_histogram_seconds_count{vol_name=\"$name\"}[1m])) by (node)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Objects Latency",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "µs",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 24
+ },
+ "hiddenSeries": false,
+ "id": 10,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_cpu_usage{vol_name=\"$name\"}[1m])*100 < 1000) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Client CPU Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:137",
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:138",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 24
+ },
+ "hiddenSeries": false,
+ "id": 11,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(juicefs_memory{vol_name=\"$name\"}) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Client Memory Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 24
+ },
+ "hiddenSeries": false,
+ "id": 21,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(go_goroutines) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Go threads",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "bytes"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 30
+ },
+ "hiddenSeries": false,
+ "id": 22,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(juicefs_blockcache_bytes{vol_name=\"$name\"}) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Block Cache Size",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "none"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 30
+ },
+ "hiddenSeries": false,
+ "id": 23,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(juicefs_blockcache_blocks{vol_name=\"$name\"}) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Block Cache Count",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "none",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 30
+ },
+ "hiddenSeries": false,
+ "id": 24,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_blockcache_hits{vol_name=\"$name\"}[1m])) by (node,mp) *100 / (sum(rate(juicefs_blockcache_hits{vol_name=\"$name\"}[1m])) by (node,mp) + sum(rate(juicefs_blockcache_miss{vol_name=\"$name\"}[1m])) by (node,mp))",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "Hits {{node}}:{{mp}}",
+ "refId": "A"
+ },
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_blockcache_hit_bytes{vol_name=\"$name\"}[1m])) by (node,mp) *100 / (sum(rate(juicefs_blockcache_hit_bytes{vol_name=\"$name\"}[1m])) by (node,mp) + sum(rate(juicefs_blockcache_miss_bytes{vol_name=\"$name\"}[1m])) by (node,mp))",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "HitBytes {{node}}:{{mp}}",
+ "refId": "B"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Block Cache Hit Rate",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 0,
+ "y": 36
+ },
+ "hiddenSeries": false,
+ "id": 25,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_compact_size_histogram_bytes_count{vol_name=\"$name\"}[1m])) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Compaction",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:1080",
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:1081",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "percent"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 8,
+ "y": 36
+ },
+ "hiddenSeries": false,
+ "id": 26,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(rate(juicefs_compact_size_histogram_bytes_sum{vol_name=\"$name\"}[1m])) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Compacted Data",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "format": "percent",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "description": "",
+ "fieldConfig": {
+ "defaults": {
+ "unit": "short"
+ },
+ "overrides": []
+ },
+ "fill": 1,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 6,
+ "w": 8,
+ "x": 16,
+ "y": 36
+ },
+ "hiddenSeries": false,
+ "id": 27,
+ "legend": {
+ "avg": false,
+ "current": true,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": true
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [],
+ "nullPointMode": "null",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "8.0.6",
+ "pointradius": 2,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "exemplar": true,
+ "expr": "sum(juicefs_fuse_open_handlers{vol_name=\"$name\"}) by (node,mp)",
+ "format": "time_series",
+ "interval": "",
+ "intervalFactor": 1,
+ "legendFormat": "{{node}}:{{mp}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Open File Handlers",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:921",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:922",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
+ }
+ ],
+ "refresh": "",
+ "schemaVersion": 30,
+ "style": "dark",
+ "tags": [],
+ "templating": {
+ "list": [
+ {
+ "current": {
+ "selected": true,
+ "text": "juicefs",
+ "value": "juicefs"
+ },
+ "hide": 0,
+ "label": null,
+ "multi": false,
+ "name": "datasource",
+ "options": [],
+ "query": "prometheus",
+ "refresh": 1,
+ "regex": "",
+ "skipUrlSync": false,
+ "type": "datasource"
+ },
+ {
+ "allValue": null,
+ "current": {},
+ "datasource": "${datasource}",
+ "definition": "label_values(juicefs_uptime, vol_name)",
+ "description": null,
+ "error": null,
+ "hide": 0,
+ "includeAll": false,
+ "label": null,
+ "multi": false,
+ "name": "name",
+ "options": [],
+ "query": {
+ "query": "label_values(juicefs_uptime, vol_name)",
+ "refId": "StandardVariableQuery"
+ },
+ "refresh": 1,
+ "regex": "",
+ "skipUrlSync": false,
+ "sort": 0,
+ "type": "query"
+ }
+ ]
+ },
+ "time": {
+ "from": "now-1h",
+ "to": "now"
+ },
+ "timepicker": {
+ "refresh_intervals": [
+ "30s",
+ "1m",
+ "5m",
+ "15m",
+ "30m",
+ "1h",
+ "2h",
+ "1d"
+ ],
+ "time_options": [
+ "5m",
+ "15m",
+ "1h",
+ "6h",
+ "12h",
+ "24h",
+ "2d",
+ "7d",
+ "30d"
+ ]
+ },
+ "timezone": "",
+ "title": "JuiceFS Dashboard",
+ "uid": "-hm07csGk",
+ "version": 3
+}
diff --git a/docs/en/images/baiduyun.png b/docs/en/images/baiduyun.png
new file mode 100644
index 0000000..a8c4a5e
Binary files /dev/null and b/docs/en/images/baiduyun.png differ
diff --git a/docs/en/images/baoyinxiaofei.png b/docs/en/images/baoyinxiaofei.png
new file mode 100644
index 0000000..8cc54cb
Binary files /dev/null and b/docs/en/images/baoyinxiaofei.png differ
diff --git a/docs/en/images/bench-guide-bench.png b/docs/en/images/bench-guide-bench.png
new file mode 100644
index 0000000..e079ad5
Binary files /dev/null and b/docs/en/images/bench-guide-bench.png differ
diff --git a/docs/en/images/bench-guide-profile.png b/docs/en/images/bench-guide-profile.png
new file mode 100644
index 0000000..2656bdf
Binary files /dev/null and b/docs/en/images/bench-guide-profile.png differ
diff --git a/docs/en/images/bench-guide-stats.png b/docs/en/images/bench-guide-stats.png
new file mode 100644
index 0000000..2ea225f
Binary files /dev/null and b/docs/en/images/bench-guide-stats.png differ
diff --git a/docs/en/images/bigo.png b/docs/en/images/bigo.png
new file mode 100644
index 0000000..8beea05
Binary files /dev/null and b/docs/en/images/bigo.png differ
diff --git a/docs/en/images/cos-bucket-url.png b/docs/en/images/cos-bucket-url.png
new file mode 100644
index 0000000..d8298c4
Binary files /dev/null and b/docs/en/images/cos-bucket-url.png differ
diff --git a/docs/en/images/digitalocean-redis-guide.png b/docs/en/images/digitalocean-redis-guide.png
new file mode 100644
index 0000000..00d410f
Binary files /dev/null and b/docs/en/images/digitalocean-redis-guide.png differ
diff --git a/docs/en/images/digitalocean-redis-url.png b/docs/en/images/digitalocean-redis-url.png
new file mode 100644
index 0000000..8c41f67
Binary files /dev/null and b/docs/en/images/digitalocean-redis-url.png differ
diff --git a/docs/en/images/dingdong.png b/docs/en/images/dingdong.png
new file mode 100644
index 0000000..b362f00
Binary files /dev/null and b/docs/en/images/dingdong.png differ
diff --git a/docs/en/images/encryption.png b/docs/en/images/encryption.png
new file mode 100644
index 0000000..a285b19
Binary files /dev/null and b/docs/en/images/encryption.png differ
diff --git a/docs/en/images/grafana_dashboard.png b/docs/en/images/grafana_dashboard.png
new file mode 100644
index 0000000..c1074c1
Binary files /dev/null and b/docs/en/images/grafana_dashboard.png differ
diff --git a/docs/en/images/hangtianhongtu.png b/docs/en/images/hangtianhongtu.png
new file mode 100644
index 0000000..8294587
Binary files /dev/null and b/docs/en/images/hangtianhongtu.png differ
diff --git a/docs/en/images/how-juicefs-stores-files-new.png b/docs/en/images/how-juicefs-stores-files-new.png
new file mode 100644
index 0000000..cb09bad
Binary files /dev/null and b/docs/en/images/how-juicefs-stores-files-new.png differ
diff --git a/docs/en/images/how-juicefs-stores-files-redis.png b/docs/en/images/how-juicefs-stores-files-redis.png
new file mode 100644
index 0000000..df29721
Binary files /dev/null and b/docs/en/images/how-juicefs-stores-files-redis.png differ
diff --git a/docs/en/images/how-juicefs-stores-files.png b/docs/en/images/how-juicefs-stores-files.png
new file mode 100644
index 0000000..12b94c0
Binary files /dev/null and b/docs/en/images/how-juicefs-stores-files.png differ
diff --git a/docs/en/images/internals-read.png b/docs/en/images/internals-read.png
new file mode 100644
index 0000000..48ca6a5
Binary files /dev/null and b/docs/en/images/internals-read.png differ
diff --git a/docs/en/images/internals-stats.png b/docs/en/images/internals-stats.png
new file mode 100644
index 0000000..8f026e8
Binary files /dev/null and b/docs/en/images/internals-stats.png differ
diff --git a/docs/en/images/internals-write.png b/docs/en/images/internals-write.png
new file mode 100644
index 0000000..1936a60
Binary files /dev/null and b/docs/en/images/internals-write.png differ
diff --git a/docs/en/images/juicefs-aliyun.png b/docs/en/images/juicefs-aliyun.png
new file mode 100644
index 0000000..7d2c340
Binary files /dev/null and b/docs/en/images/juicefs-aliyun.png differ
diff --git a/docs/en/images/juicefs-arch-new.png b/docs/en/images/juicefs-arch-new.png
new file mode 100644
index 0000000..cf065c0
Binary files /dev/null and b/docs/en/images/juicefs-arch-new.png differ
diff --git a/docs/en/images/juicefs-arch.png b/docs/en/images/juicefs-arch.png
new file mode 100644
index 0000000..b36611f
Binary files /dev/null and b/docs/en/images/juicefs-arch.png differ
diff --git a/docs/en/images/juicefs-bench.png b/docs/en/images/juicefs-bench.png
new file mode 100644
index 0000000..63739c2
Binary files /dev/null and b/docs/en/images/juicefs-bench.png differ
diff --git a/docs/en/images/juicefs-logo.png b/docs/en/images/juicefs-logo.png
new file mode 100644
index 0000000..4b769b3
Binary files /dev/null and b/docs/en/images/juicefs-logo.png differ
diff --git a/docs/en/images/juicefs-on-windows-new.png b/docs/en/images/juicefs-on-windows-new.png
new file mode 100644
index 0000000..e1f1979
Binary files /dev/null and b/docs/en/images/juicefs-on-windows-new.png differ
diff --git a/docs/en/images/juicefs-on-windows.png b/docs/en/images/juicefs-on-windows.png
new file mode 100644
index 0000000..427fe74
Binary files /dev/null and b/docs/en/images/juicefs-on-windows.png differ
diff --git a/docs/en/images/juicefs-profiling.gif b/docs/en/images/juicefs-profiling.gif
new file mode 100644
index 0000000..3db752f
Binary files /dev/null and b/docs/en/images/juicefs-profiling.gif differ
diff --git a/docs/en/images/juicefs-qcloud.png b/docs/en/images/juicefs-qcloud.png
new file mode 100644
index 0000000..f088029
Binary files /dev/null and b/docs/en/images/juicefs-qcloud.png differ
diff --git a/docs/en/images/juicefs-s3-gateway-arch.png b/docs/en/images/juicefs-s3-gateway-arch.png
new file mode 100644
index 0000000..36511e5
Binary files /dev/null and b/docs/en/images/juicefs-s3-gateway-arch.png differ
diff --git a/docs/en/images/juicefs-storage-format-new.png b/docs/en/images/juicefs-storage-format-new.png
new file mode 100644
index 0000000..684d399
Binary files /dev/null and b/docs/en/images/juicefs-storage-format-new.png differ
diff --git a/docs/en/images/juicefs-storage-format.png b/docs/en/images/juicefs-storage-format.png
new file mode 100644
index 0000000..adfe433
Binary files /dev/null and b/docs/en/images/juicefs-storage-format.png differ
diff --git a/docs/en/images/juicefs_stats_watcher.png b/docs/en/images/juicefs_stats_watcher.png
new file mode 100644
index 0000000..5b0bb21
Binary files /dev/null and b/docs/en/images/juicefs_stats_watcher.png differ
diff --git a/docs/en/images/k3s-nginx-welcome.png b/docs/en/images/k3s-nginx-welcome.png
new file mode 100644
index 0000000..15e6ae6
Binary files /dev/null and b/docs/en/images/k3s-nginx-welcome.png differ
diff --git a/docs/en/images/kubesphere_app_shop.png b/docs/en/images/kubesphere_app_shop.png
new file mode 100644
index 0000000..9796282
Binary files /dev/null and b/docs/en/images/kubesphere_app_shop.png differ
diff --git a/docs/en/images/kubesphere_app_shop_en.png b/docs/en/images/kubesphere_app_shop_en.png
new file mode 100644
index 0000000..7462d66
Binary files /dev/null and b/docs/en/images/kubesphere_app_shop_en.png differ
diff --git a/docs/en/images/kubesphere_app_template.png b/docs/en/images/kubesphere_app_template.png
new file mode 100644
index 0000000..69abc48
Binary files /dev/null and b/docs/en/images/kubesphere_app_template.png differ
diff --git a/docs/en/images/kubesphere_app_template_en.png b/docs/en/images/kubesphere_app_template_en.png
new file mode 100644
index 0000000..64c9360
Binary files /dev/null and b/docs/en/images/kubesphere_app_template_en.png differ
diff --git a/docs/en/images/kubesphere_create_minio.png b/docs/en/images/kubesphere_create_minio.png
new file mode 100644
index 0000000..ed02a25
Binary files /dev/null and b/docs/en/images/kubesphere_create_minio.png differ
diff --git a/docs/en/images/kubesphere_create_minio_en.png b/docs/en/images/kubesphere_create_minio_en.png
new file mode 100644
index 0000000..b8f6b9c
Binary files /dev/null and b/docs/en/images/kubesphere_create_minio_en.png differ
diff --git a/docs/en/images/kubesphere_create_secret.png b/docs/en/images/kubesphere_create_secret.png
new file mode 100644
index 0000000..3aa88fd
Binary files /dev/null and b/docs/en/images/kubesphere_create_secret.png differ
diff --git a/docs/en/images/kubesphere_deployment.png b/docs/en/images/kubesphere_deployment.png
new file mode 100644
index 0000000..db07a06
Binary files /dev/null and b/docs/en/images/kubesphere_deployment.png differ
diff --git a/docs/en/images/kubesphere_deployment_en.png b/docs/en/images/kubesphere_deployment_en.png
new file mode 100644
index 0000000..c5e4e92
Binary files /dev/null and b/docs/en/images/kubesphere_deployment_en.png differ
diff --git a/docs/en/images/kubesphere_install_csi.png b/docs/en/images/kubesphere_install_csi.png
new file mode 100644
index 0000000..6f864fc
Binary files /dev/null and b/docs/en/images/kubesphere_install_csi.png differ
diff --git a/docs/en/images/kubesphere_install_csi_en.png b/docs/en/images/kubesphere_install_csi_en.png
new file mode 100644
index 0000000..cb92805
Binary files /dev/null and b/docs/en/images/kubesphere_install_csi_en.png differ
diff --git a/docs/en/images/kubesphere_minio.png b/docs/en/images/kubesphere_minio.png
new file mode 100644
index 0000000..4812423
Binary files /dev/null and b/docs/en/images/kubesphere_minio.png differ
diff --git a/docs/en/images/kubesphere_minio_en.png b/docs/en/images/kubesphere_minio_en.png
new file mode 100644
index 0000000..46c8a48
Binary files /dev/null and b/docs/en/images/kubesphere_minio_en.png differ
diff --git a/docs/en/images/kubesphere_org_space.png b/docs/en/images/kubesphere_org_space.png
new file mode 100644
index 0000000..fb68089
Binary files /dev/null and b/docs/en/images/kubesphere_org_space.png differ
diff --git a/docs/en/images/kubesphere_pod.png b/docs/en/images/kubesphere_pod.png
new file mode 100644
index 0000000..030b4b1
Binary files /dev/null and b/docs/en/images/kubesphere_pod.png differ
diff --git a/docs/en/images/kubesphere_pod_en.png b/docs/en/images/kubesphere_pod_en.png
new file mode 100644
index 0000000..1173b11
Binary files /dev/null and b/docs/en/images/kubesphere_pod_en.png differ
diff --git a/docs/en/images/kubesphere_pvc.png b/docs/en/images/kubesphere_pvc.png
new file mode 100644
index 0000000..79a1a41
Binary files /dev/null and b/docs/en/images/kubesphere_pvc.png differ
diff --git a/docs/en/images/kubesphere_pvc_en.png b/docs/en/images/kubesphere_pvc_en.png
new file mode 100644
index 0000000..27502b9
Binary files /dev/null and b/docs/en/images/kubesphere_pvc_en.png differ
diff --git a/docs/en/images/kubesphere_redis.png b/docs/en/images/kubesphere_redis.png
new file mode 100644
index 0000000..ab39e7c
Binary files /dev/null and b/docs/en/images/kubesphere_redis.png differ
diff --git a/docs/en/images/kubesphere_redis_en.png b/docs/en/images/kubesphere_redis_en.png
new file mode 100644
index 0000000..947dd60
Binary files /dev/null and b/docs/en/images/kubesphere_redis_en.png differ
diff --git a/docs/en/images/kubesphere_sc_create.png b/docs/en/images/kubesphere_sc_create.png
new file mode 100644
index 0000000..6f725e2
Binary files /dev/null and b/docs/en/images/kubesphere_sc_create.png differ
diff --git a/docs/en/images/kubesphere_sc_create_en.png b/docs/en/images/kubesphere_sc_create_en.png
new file mode 100644
index 0000000..1b8bc38
Binary files /dev/null and b/docs/en/images/kubesphere_sc_create_en.png differ
diff --git a/docs/en/images/kubesphere_sc_update.png b/docs/en/images/kubesphere_sc_update.png
new file mode 100644
index 0000000..e05e217
Binary files /dev/null and b/docs/en/images/kubesphere_sc_update.png differ
diff --git a/docs/en/images/kubesphere_sc_update_en.png b/docs/en/images/kubesphere_sc_update_en.png
new file mode 100644
index 0000000..025d461
Binary files /dev/null and b/docs/en/images/kubesphere_sc_update_en.png differ
diff --git a/docs/en/images/kubesphere_secret_en.png b/docs/en/images/kubesphere_secret_en.png
new file mode 100644
index 0000000..0a5eacf
Binary files /dev/null and b/docs/en/images/kubesphere_secret_en.png differ
diff --git a/docs/en/images/kubesphere_shop_juicefs_en.png b/docs/en/images/kubesphere_shop_juicefs_en.png
new file mode 100644
index 0000000..db5e7a8
Binary files /dev/null and b/docs/en/images/kubesphere_shop_juicefs_en.png differ
diff --git a/docs/en/images/kubesphere_update_csi.png b/docs/en/images/kubesphere_update_csi.png
new file mode 100644
index 0000000..203c9a4
Binary files /dev/null and b/docs/en/images/kubesphere_update_csi.png differ
diff --git a/docs/en/images/kubesphere_update_csi_en.png b/docs/en/images/kubesphere_update_csi_en.png
new file mode 100644
index 0000000..8ca15e2
Binary files /dev/null and b/docs/en/images/kubesphere_update_csi_en.png differ
diff --git a/docs/en/images/kubesphere_update_secret.png b/docs/en/images/kubesphere_update_secret.png
new file mode 100644
index 0000000..c19e630
Binary files /dev/null and b/docs/en/images/kubesphere_update_secret.png differ
diff --git a/docs/en/images/kubesphere_update_secret_en.png b/docs/en/images/kubesphere_update_secret_en.png
new file mode 100644
index 0000000..02a9bbb
Binary files /dev/null and b/docs/en/images/kubesphere_update_secret_en.png differ
diff --git a/docs/en/images/kubesphere_workload.png b/docs/en/images/kubesphere_workload.png
new file mode 100644
index 0000000..abf6957
Binary files /dev/null and b/docs/en/images/kubesphere_workload.png differ
diff --git a/docs/en/images/kubesphere_workload_en.png b/docs/en/images/kubesphere_workload_en.png
new file mode 100644
index 0000000..519a7d2
Binary files /dev/null and b/docs/en/images/kubesphere_workload_en.png differ
diff --git a/docs/en/images/lixiang.png b/docs/en/images/lixiang.png
new file mode 100644
index 0000000..1714573
Binary files /dev/null and b/docs/en/images/lixiang.png differ
diff --git a/docs/en/images/meta-auto-backup-list.png b/docs/en/images/meta-auto-backup-list.png
new file mode 100644
index 0000000..eeaa384
Binary files /dev/null and b/docs/en/images/meta-auto-backup-list.png differ
diff --git a/docs/en/images/metadata-benchmark.svg b/docs/en/images/metadata-benchmark.svg
new file mode 100644
index 0000000..83273aa
--- /dev/null
+++ b/docs/en/images/metadata-benchmark.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/en/images/mi.png b/docs/en/images/mi.png
new file mode 100644
index 0000000..d8bc210
Binary files /dev/null and b/docs/en/images/mi.png differ
diff --git a/docs/en/images/minio-browser.png b/docs/en/images/minio-browser.png
new file mode 100644
index 0000000..11d2dcd
Binary files /dev/null and b/docs/en/images/minio-browser.png differ
diff --git a/docs/en/images/pv-on-juicefs.png b/docs/en/images/pv-on-juicefs.png
new file mode 100644
index 0000000..87d57a0
Binary files /dev/null and b/docs/en/images/pv-on-juicefs.png differ
diff --git a/docs/en/images/qcloud-redis-network.png b/docs/en/images/qcloud-redis-network.png
new file mode 100644
index 0000000..17e328c
Binary files /dev/null and b/docs/en/images/qcloud-redis-network.png differ
diff --git a/docs/en/images/qcloud.png b/docs/en/images/qcloud.png
new file mode 100644
index 0000000..0605b23
Binary files /dev/null and b/docs/en/images/qcloud.png differ
diff --git a/docs/en/images/rancher-chart-info.jpg b/docs/en/images/rancher-chart-info.jpg
new file mode 100644
index 0000000..95abb78
Binary files /dev/null and b/docs/en/images/rancher-chart-info.jpg differ
diff --git a/docs/en/images/rancher-chart-installed.jpg b/docs/en/images/rancher-chart-installed.jpg
new file mode 100644
index 0000000..63334f4
Binary files /dev/null and b/docs/en/images/rancher-chart-installed.jpg differ
diff --git a/docs/en/images/rancher-chart-search.jpg b/docs/en/images/rancher-chart-search.jpg
new file mode 100644
index 0000000..aac3ce1
Binary files /dev/null and b/docs/en/images/rancher-chart-search.jpg differ
diff --git a/docs/en/images/rancher-cluster-create.jpg b/docs/en/images/rancher-cluster-create.jpg
new file mode 100644
index 0000000..ecbcbff
Binary files /dev/null and b/docs/en/images/rancher-cluster-create.jpg differ
diff --git a/docs/en/images/rancher-cluster-options.jpg b/docs/en/images/rancher-cluster-options.jpg
new file mode 100644
index 0000000..864753d
Binary files /dev/null and b/docs/en/images/rancher-cluster-options.jpg differ
diff --git a/docs/en/images/rancher-clusters.jpg b/docs/en/images/rancher-clusters.jpg
new file mode 100644
index 0000000..7a0075d
Binary files /dev/null and b/docs/en/images/rancher-clusters.jpg differ
diff --git a/docs/en/images/rancher-new-repo.jpg b/docs/en/images/rancher-new-repo.jpg
new file mode 100644
index 0000000..6a646ae
Binary files /dev/null and b/docs/en/images/rancher-new-repo.jpg differ
diff --git a/docs/en/images/rancher-pvc.jpg b/docs/en/images/rancher-pvc.jpg
new file mode 100644
index 0000000..6c68d6b
Binary files /dev/null and b/docs/en/images/rancher-pvc.jpg differ
diff --git a/docs/en/images/rancher-repos.jpg b/docs/en/images/rancher-repos.jpg
new file mode 100644
index 0000000..ab126b6
Binary files /dev/null and b/docs/en/images/rancher-repos.jpg differ
diff --git a/docs/en/images/rancher-welcome.jpeg b/docs/en/images/rancher-welcome.jpeg
new file mode 100644
index 0000000..bd77dd1
Binary files /dev/null and b/docs/en/images/rancher-welcome.jpeg differ
diff --git a/docs/en/images/repo-diagram.svg b/docs/en/images/repo-diagram.svg
new file mode 100644
index 0000000..9cde737
--- /dev/null
+++ b/docs/en/images/repo-diagram.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/en/images/s3-gateway-file-manager.jpg b/docs/en/images/s3-gateway-file-manager.jpg
new file mode 100644
index 0000000..e5b862f
Binary files /dev/null and b/docs/en/images/s3-gateway-file-manager.jpg differ
diff --git a/docs/en/images/s3ql-bin.jpg b/docs/en/images/s3ql-bin.jpg
new file mode 100644
index 0000000..3397f19
Binary files /dev/null and b/docs/en/images/s3ql-bin.jpg differ
diff --git a/docs/en/images/sequential-read-write-benchmark.svg b/docs/en/images/sequential-read-write-benchmark.svg
new file mode 100644
index 0000000..4826a70
--- /dev/null
+++ b/docs/en/images/sequential-read-write-benchmark.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/en/images/sf.png b/docs/en/images/sf.png
new file mode 100644
index 0000000..dc7a51f
Binary files /dev/null and b/docs/en/images/sf.png differ
diff --git a/docs/en/images/shopee.png b/docs/en/images/shopee.png
new file mode 100644
index 0000000..51f50e1
Binary files /dev/null and b/docs/en/images/shopee.png differ
diff --git a/docs/en/images/spark_ql_orc.png b/docs/en/images/spark_ql_orc.png
new file mode 100644
index 0000000..1af8b40
Binary files /dev/null and b/docs/en/images/spark_ql_orc.png differ
diff --git a/docs/en/images/spark_sql_parquet.png b/docs/en/images/spark_sql_parquet.png
new file mode 100644
index 0000000..1ebe06f
Binary files /dev/null and b/docs/en/images/spark_sql_parquet.png differ
diff --git a/docs/en/images/sqlite-info.png b/docs/en/images/sqlite-info.png
new file mode 100644
index 0000000..24f8832
Binary files /dev/null and b/docs/en/images/sqlite-info.png differ
diff --git a/docs/en/images/sqlite-mount-local.png b/docs/en/images/sqlite-mount-local.png
new file mode 100644
index 0000000..90fe649
Binary files /dev/null and b/docs/en/images/sqlite-mount-local.png differ
diff --git a/docs/en/images/windows-mount-startup.png b/docs/en/images/windows-mount-startup.png
new file mode 100644
index 0000000..5bd57b4
Binary files /dev/null and b/docs/en/images/windows-mount-startup.png differ
diff --git a/docs/en/images/windows-path-en.png b/docs/en/images/windows-path-en.png
new file mode 100644
index 0000000..ec57747
Binary files /dev/null and b/docs/en/images/windows-path-en.png differ
diff --git a/docs/en/images/windows-path.png b/docs/en/images/windows-path.png
new file mode 100644
index 0000000..3eef636
Binary files /dev/null and b/docs/en/images/windows-path.png differ
diff --git a/docs/en/images/windows-run-startup.png b/docs/en/images/windows-run-startup.png
new file mode 100644
index 0000000..e46be46
Binary files /dev/null and b/docs/en/images/windows-run-startup.png differ
diff --git a/docs/en/images/wsl/access-jfs-from-win-en.png b/docs/en/images/wsl/access-jfs-from-win-en.png
new file mode 100644
index 0000000..ca91998
Binary files /dev/null and b/docs/en/images/wsl/access-jfs-from-win-en.png differ
diff --git a/docs/en/images/wsl/access-jfs-from-win.png b/docs/en/images/wsl/access-jfs-from-win.png
new file mode 100644
index 0000000..6a55698
Binary files /dev/null and b/docs/en/images/wsl/access-jfs-from-win.png differ
diff --git a/docs/en/images/wsl/init.png b/docs/en/images/wsl/init.png
new file mode 100644
index 0000000..3c1a862
Binary files /dev/null and b/docs/en/images/wsl/init.png differ
diff --git a/docs/en/images/wsl/mount-point.png b/docs/en/images/wsl/mount-point.png
new file mode 100644
index 0000000..ff69305
Binary files /dev/null and b/docs/en/images/wsl/mount-point.png differ
diff --git a/docs/en/images/wsl/startmenu-en.png b/docs/en/images/wsl/startmenu-en.png
new file mode 100644
index 0000000..22c901d
Binary files /dev/null and b/docs/en/images/wsl/startmenu-en.png differ
diff --git a/docs/en/images/wsl/startmenu.png b/docs/en/images/wsl/startmenu.png
new file mode 100644
index 0000000..4831c2d
Binary files /dev/null and b/docs/en/images/wsl/startmenu.png differ
diff --git a/docs/en/images/wsl/windows-to-linux-en.png b/docs/en/images/wsl/windows-to-linux-en.png
new file mode 100644
index 0000000..48cf62f
Binary files /dev/null and b/docs/en/images/wsl/windows-to-linux-en.png differ
diff --git a/docs/en/images/wsl/windows-to-linux.png b/docs/en/images/wsl/windows-to-linux.png
new file mode 100644
index 0000000..6dba6ac
Binary files /dev/null and b/docs/en/images/wsl/windows-to-linux.png differ
diff --git a/docs/en/images/wsl/winver-en.png b/docs/en/images/wsl/winver-en.png
new file mode 100644
index 0000000..2459239
Binary files /dev/null and b/docs/en/images/wsl/winver-en.png differ
diff --git a/docs/en/images/wsl/winver.png b/docs/en/images/wsl/winver.png
new file mode 100644
index 0000000..0a9a282
Binary files /dev/null and b/docs/en/images/wsl/winver.png differ
diff --git a/docs/en/images/wsl/zone-identifier-en.png b/docs/en/images/wsl/zone-identifier-en.png
new file mode 100644
index 0000000..491625a
Binary files /dev/null and b/docs/en/images/wsl/zone-identifier-en.png differ
diff --git a/docs/en/images/wsl/zone-identifier.png b/docs/en/images/wsl/zone-identifier.png
new file mode 100644
index 0000000..51d1881
Binary files /dev/null and b/docs/en/images/wsl/zone-identifier.png differ
diff --git a/docs/en/introduction/_case.md b/docs/en/introduction/_case.md
new file mode 100644
index 0000000..56e8f50
--- /dev/null
+++ b/docs/en/introduction/_case.md
@@ -0,0 +1,39 @@
+---
+sidebar_label: Use Scenarios & Limits
+sidebar_position: 2
+slug: /case
+---
+# JuiceFS Use Scenarios & Limits
+
+JuiceFS is widely applicable to various data storage and sharing scenarios. This page summarizes JuiceFS application cases from all over the world, and all community users are welcome to help maintain this list.
+
+## Data backup and recovery
+
+- [JuiceFS for archive NGINX logs](https://juicefs.com/docs/en/archive_nginx_log_in_juicefs.html)
+- [JuiceFS for MySQL backup, verification and recovery](https://juicefs.com/docs/en/backup_mysql_in_juicefs.html)
+- [Customer Stories: Xiachufang MySQL backup practice on JuiceFS](https://juicefs.com/blog/en/posts/xiachufang-mysql-backup-practice-on-juicefs/)
+
+## Big Data
+
+- [How to effectively reduce the load of HDFS cluster for Qutoutiao(NASDAQ:QTT)](https://juicefs.com/blog/en/posts/qutoutiao-big-data-platform-user-case/)
+- [How does the Globalegrow data platform achieve both speed and money savings?](https://juicefs.com/blog/en/posts/globalegrow-big-data-platform-user-case/)
+- [How to make HBase faster, more stable, and cheaper](https://juicefs.com/blog/en/posts/how-to-make-hbase-faster-more-stable-and-cheaper/)
+- [Exploring storage and computing separation for ClickHouse](https://juicefs.com/blog/en/posts/clickhouse-disaggregated-storage-and-compute-practice/)
+
+## Data sharing
+
+- [Building a Milvus Cluster Based on JuiceFS](https://juicefs.com/blog/en/posts/build-milvus-distributed-cluster-based-on-juicefs/)
+
+
+## Contribution
+
+If you want to add JuiceFS application cases to this list, you can apply in the following ways:
+
+### 1. GitHub contribution
+
+You can fork this repository on GitHub, add the title and URL of your case page to the appropriate category, then create a Pull Request and wait for review and merging.
+
+### 2. Social media
+
+You can join the official JuiceFS [Slack channel](https://juicefs.slack.com/) and contact any staff member about contributing a case.
+
diff --git a/docs/en/introduction/architecture.md b/docs/en/introduction/architecture.md
new file mode 100644
index 0000000..3025ac1
--- /dev/null
+++ b/docs/en/introduction/architecture.md
@@ -0,0 +1,44 @@
+---
+sidebar_label: Architecture
+sidebar_position: 2
+slug: /architecture
+---
+
+# Architecture
+
+The JuiceFS file system consists of three parts:
+
+1. **JuiceFS Client**: coordinates the object storage and metadata engine, and implements file system interfaces such as POSIX, Hadoop, Kubernetes CSI Driver and S3 Gateway.
+2. **Data Storage**: stores the data itself; supported media include local disk, public or private cloud object storage, HDFS, etc.
+3. **Metadata Engine**: stores the metadata corresponding to the data, such as file name, file size, permission group, creation and modification time and the directory structure; supported engines include Redis, MySQL, TiKV and others.
+
+![image](../images/juicefs-arch-new.png)
+
+As a file system, JuiceFS handles the data and its corresponding metadata separately, with the data being stored in the object store and the metadata being stored in the metadata engine.
+
+In terms of **data storage**, JuiceFS supports almost all public cloud object stores, as well as OpenStack Swift, Ceph, MinIO and other open source object stores that support private deployments.
+
+In terms of **metadata storage**, JuiceFS uses a multi-engine design and currently supports Redis, TiKV, MySQL/MariaDB, PostgreSQL, SQLite, etc. as metadata engines, with more to be added over time. You are welcome to [submit an issue](https://github.com/juicedata/juicefs/issues) to share your requirements.
+
+In terms of **File System Interface** implementation:
+
+- With **FUSE**, the JuiceFS file system can be mounted to the server in a POSIX-compatible manner to use massive cloud storage directly as local storage.
+- With **Hadoop Java SDK**, JuiceFS file system can directly replace HDFS and provide low-cost mass storage for Hadoop.
+- With the **Kubernetes CSI Driver**, the JuiceFS file system can directly provide mass storage for Kubernetes.
+- With **S3 Gateway**, applications using S3 as the storage layer can directly access the JuiceFS file system and use tools such as AWS CLI, s3cmd, and MinIO client.
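+
+As a concrete illustration of the FUSE path, a minimal sketch: assuming a volume whose metadata lives in Redis at `redis://192.168.1.6:6379/1` (a placeholder address) and a mount point of `/mnt/jfs`, the volume can be mounted and used like any local directory:
+
+```bash
+# Mount the volume in the background (-d), then use it like a local directory
+sudo juicefs mount -d redis://192.168.1.6:6379/1 /mnt/jfs
+cp -r ./dataset /mnt/jfs/dataset
+ls /mnt/jfs
+```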
+
+## How JuiceFS Stores Files
+
+A file system acts as the medium between the user and the hard drive, allowing files to be stored on the drive properly. Windows commonly uses file systems such as FAT32 and NTFS, while Linux commonly uses Ext4, XFS, Btrfs, etc. Each file system has its own way of organizing and managing files, which determines characteristics such as its storage capacity and performance.
+
+As a file system, JuiceFS is no exception. Its strong consistency and high performance are inseparable from its unique file management mode.
+
+Unlike traditional file systems, which can only use local disks to store data and the corresponding metadata, JuiceFS formats the data and stores it in object storage (cloud storage), while storing the metadata corresponding to the data in databases such as Redis.
+
+Any file stored in JuiceFS is split into fixed-size **"Chunks"**, with a default upper limit of 64 MiB. Each Chunk is composed of one or more **"Slices"**; the length of a Slice is not fixed and depends on how the file is written. Each Slice is further split into fixed-size **"Blocks"**, 4 MiB by default. Finally, these Blocks are stored in the object storage, while JuiceFS stores each file and its Chunks, Slices, Blocks and other metadata in the metadata engine.
+
+![](../images/juicefs-storage-format-new.png)
+
+With JuiceFS, files are eventually split into Chunks, Slices and Blocks and stored in object storage. Therefore, the source files stored in JuiceFS cannot be found in the file browser of the object storage platform; instead, the bucket contains a `chunks` directory and a set of numbered directories and files. Don't panic, this is exactly the secret behind the high performance of the JuiceFS file system!
+
+![How JuiceFS stores your files](../images/how-juicefs-stores-files-new.png)
diff --git a/docs/en/introduction/introduction.md b/docs/en/introduction/introduction.md
new file mode 100644
index 0000000..a8b31ea
--- /dev/null
+++ b/docs/en/introduction/introduction.md
@@ -0,0 +1,65 @@
+---
+title: What is JuiceFS?
+sidebar_label: What is JuiceFS
+sidebar_position: 1
+slug: .
+---
+#
+
+![JuiceFS LOGO](../images/juicefs-logo.png)
+
+**JuiceFS** is a high-performance shared file system designed for cloud-native use and released under the Apache License 2.0. It provides full [POSIX](https://en.wikipedia.org/wiki/POSIX) compatibility, allowing almost all object stores to be used locally as massive local disks, and to be mounted and read on different hosts across platforms and regions at the same time.
+
+JuiceFS implements the distributed design of file system by separating "data" and "metadata" storage architecture. When using JuiceFS to store data, the data itself is persisted in [object storage](../reference/how_to_setup_object_storage.md#supported-object-storage) (e.g., Amazon S3), and the corresponding metadata can be persisted on-demand in various [databases](../reference/how_to_setup_metadata_engine.md) such as Redis, MySQL, TiKV, SQLite, etc.
+
+JuiceFS provides rich APIs for various forms of data management, analysis, archiving, and backup, and can seamlessly interface with big data, machine learning, artificial intelligence and other application platforms without code changes, providing them with massive, elastic, and low-cost high-performance storage. It lets you focus on business development and improve R&D efficiency without worrying about availability, disaster recovery, monitoring, and expansion, and it makes it easier for operations and maintenance teams to transition to DevOps.
+
+## Features
+
+1. **POSIX Compatible**: used like a local file system, seamlessly interfacing with existing applications.
+2. **HDFS Compatible**: Full compatibility with the [HDFS API](../deployment/hadoop_java_sdk.md), providing enhanced metadata performance.
+3. **S3 Compatible**: Provides [S3 gateway](../deployment/s3_gateway.md) implementing the S3-compatible access interface.
+4. **Cloud-Native**: Use JuiceFS in Kubernetes easily via [CSI Driver](../deployment/how_to_use_on_kubernetes.md).
+5. **Distributed**: the same file system can be mounted on thousands of servers at the same time, with high performance concurrent reads and writes and shared data.
+6. **Strong Consistency**: confirmed file changes are immediately visible on all servers, ensuring strong consistency.
+7. **Better Performance**: millisecond latency, nearly unlimited throughput (depending on object storage scale), see [performance test results](../benchmark/benchmark.md).
+8. **Data Security**: Supports encryption in transit and encryption at rest, [View Details](../security/encrypt.md).
+9. **File lock**: support for BSD lock (flock) and POSIX lock (fcntl).
+10. **Data Compression**: Supports [LZ4](https://lz4.github.io/lz4) and [Zstandard](https://facebook.github.io/zstd) compression algorithms to save storage space.
+
+## Architecture
+
+The JuiceFS file system consists of three parts:
+
+1. **JuiceFS Client**: coordinates the object storage and metadata engine, and implements file system interfaces such as POSIX, Hadoop, Kubernetes CSI Driver and S3 Gateway.
+2. **Data Storage**: stores the data itself; supported media include local disk, public or private cloud object storage, HDFS, etc.
+3. **Metadata Engine**: stores the metadata corresponding to the data, such as file name, file size, permission group, creation and modification time and the directory structure; supported engines include Redis, MySQL, TiKV and others.
+
+![image](../images/juicefs-arch-new.png)
+
+As a file system, JuiceFS handles the data and its corresponding metadata separately, with the data being stored in the object store and the metadata being stored in the metadata engine.
+
+In terms of **data storage**, JuiceFS supports almost all public cloud object stores, as well as OpenStack Swift, Ceph, MinIO and other open source object stores that support private deployments.
+
+In terms of **metadata storage**, JuiceFS uses a multi-engine design and currently supports Redis, TiKV, MySQL/MariaDB, PostgreSQL, SQLite, etc. as metadata engines, with more to be added over time. You are welcome to [submit an issue](https://github.com/juicedata/juicefs/issues) to share your requirements.
+
+In terms of **File System Interface** implementation:
+
+- With **FUSE**, the JuiceFS file system can be mounted to the server in a POSIX-compatible manner to use massive cloud storage directly as local storage.
+- With **Hadoop Java SDK**, JuiceFS file system can directly replace HDFS and provide low-cost mass storage for Hadoop.
+- With the **Kubernetes CSI Driver**, the JuiceFS file system can directly provide mass storage for Kubernetes.
+- With **S3 Gateway**, applications using S3 as the storage layer can directly access the JuiceFS file system and use tools such as AWS CLI, s3cmd, and MinIO client.
+
+## Scenarios
+
+JuiceFS is designed for massive data storage and can be used as an alternative to many distributed file systems and network file systems, especially for the following scenarios.
+
+- **Big Data Analytics**: HDFS-compatible without any intrusive API changes to the business; seamless integration with mainstream computing engines (Spark, Presto, Hive, etc.); infinitely scalable storage space; nearly zero operation and maintenance cost; a complete caching mechanism that delivers performance several times higher than raw object storage.
+- **Machine Learning**: POSIX compatible, supporting all machine learning and deep learning frameworks; sharing capabilities that improve team efficiency in managing and using data.
+- **Persistent volumes in container clusters**: Kubernetes CSI support; persistent storage independent of the container lifetime; strong consistency to ensure correct data; takes over data storage requirements so that services can remain stateless.
+- **Shared Workspace**: can be mounted on any host; no restrictions on concurrent reads and writes from multiple clients; POSIX compatible with existing data flows and scripting operations.
+- **Data Backup**: back up all kinds of data in unlimited, smoothly scalable storage space; combined with the shared mount feature, data from multiple hosts can be aggregated in one place for unified backup.
+
+## Data Privacy
+
+JuiceFS is open source software, and you can find the full source code on [GitHub](https://github.com/juicedata/juicefs). When you use JuiceFS to store data, the data is split into chunks according to certain rules and stored in the object storage or other storage media you define, and the metadata corresponding to the data is stored in the database you define.
diff --git a/docs/en/mount_at_boot.md b/docs/en/mount_at_boot.md
new file mode 100644
index 0000000..fc9503a
--- /dev/null
+++ b/docs/en/mount_at_boot.md
@@ -0,0 +1,90 @@
+# Mount JuiceFS at Boot
+
+This is a guide about how to mount JuiceFS automatically at boot.
+
+## Linux
+
+Copy `juicefs` as `/sbin/mount.juicefs`, then edit `/etc/fstab` by adding the following line:
+
+```
+<META-URL> <MOUNTPOINT> juicefs _netdev[,<MOUNT-OPTIONS>] 0 0
+```
+
+The format of `<META-URL>` is `redis://<user>:<password>@<host>:<port>/<db>`, e.g. `redis://localhost:6379/1`. Replace `<MOUNTPOINT>` with the specific path you want to mount JuiceFS to, e.g. `/jfs`. If you need to set [mount options](reference/command_reference.md#juicefs-mount), replace `[,<MOUNT-OPTIONS>]` with a comma separated options list. The following line is an example:
+
+```
+redis://localhost:6379/1 /jfs juicefs _netdev,max-uploads=50,writeback,cache-size=2048 0 0
+```
+
+**Note: By default, CentOS 6 will NOT mount network file systems after boot; run the following command to enable it:**
+
+```bash
+$ sudo chkconfig --add netfs
+```
+
+## macOS
+
+Create a file named `io.juicefs.<NAME>.plist` under `~/Library/LaunchAgents`. Replace `<NAME>` with the JuiceFS volume name. Add the following contents to the file (again, replace `NAME`, `PATH-TO-JUICEFS`, `META-URL` and `MOUNTPOINT` with appropriate values):
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+  <key>Label</key>
+  <string>io.juicefs.NAME</string>
+  <key>ProgramArguments</key>
+  <array>
+    <string>PATH-TO-JUICEFS</string>
+    <string>mount</string>
+    <string>META-URL</string>
+    <string>MOUNTPOINT</string>
+  </array>
+  <key>RunAtLoad</key>
+  <true/>
+</dict>
+</plist>
+```
+
+Use the following commands to load the file created in the previous step and test whether the loading is successful. **Please make sure the Redis server is already running.**
+
+```bash
+$ launchctl load ~/Library/LaunchAgents/io.juicefs.<NAME>.plist
+$ launchctl start ~/Library/LaunchAgents/io.juicefs.<NAME>
+$ ls <MOUNTPOINT>
+```
+
+If the mount failed, you can add the following configuration to the `io.juicefs.<NAME>.plist` file for debugging purposes:
+
+```xml
+  <key>StandardOutPath</key>
+  <string>/tmp/juicefs.out</string>
+  <key>StandardErrorPath</key>
+  <string>/tmp/juicefs.err</string>
+```
+
+Use the following commands to reload the latest configuration and inspect the output:
+
+```bash
+$ launchctl unload ~/Library/LaunchAgents/io.juicefs.<NAME>.plist
+$ launchctl load ~/Library/LaunchAgents/io.juicefs.<NAME>.plist
+$ cat /tmp/juicefs.out
+$ cat /tmp/juicefs.err
+```
+
+If you installed the Redis server via Homebrew, you can use the following command to start it at boot:
+
+```bash
+$ brew services start redis
+```
+
+Then add the following configuration to the `io.juicefs.<NAME>.plist` file to ensure the Redis server is loaded:
+
+```xml
+  <key>KeepAlive</key>
+  <dict>
+    <key>OtherJobEnabled</key>
+    <string>homebrew.mxcl.redis</string>
+  </dict>
+```
+
diff --git a/docs/en/reference/command_reference.md b/docs/en/reference/command_reference.md
new file mode 100644
index 0000000..48e260f
--- /dev/null
+++ b/docs/en/reference/command_reference.md
@@ -0,0 +1,738 @@
+---
+sidebar_label: Command Reference
+sidebar_position: 1
+slug: /command_reference
+---
+# Command Reference
+
+There are many commands to help you manage your file system. This page provides a detailed reference for these commands.
+
+## Overview
+
+If you run `juicefs` by itself, it will print all available commands. In addition, you can add the `-h/--help` flag after each command to get more information about it.
+
+```bash
+$ juicefs -h
+NAME:
+ juicefs - A POSIX file system built on Redis and object storage.
+
+USAGE:
+ juicefs [global options] command [command options] [arguments...]
+
+VERSION:
+ 1.0-dev (2021-12-27 3462bdbf)
+
+COMMANDS:
+ format format a volume
+ mount mount a volume
+ umount unmount a volume
+ gateway S3-compatible gateway
+ sync sync between two storage
+ rmr remove directories recursively
+ info show internal information for paths or inodes
+ bench run benchmark to read/write/stat big/small files
+ gc collect any leaked objects
+ fsck Check consistency of file system
+ profile analyze access log
+ stats show runtime statistics
+ status show status of JuiceFS
+ warmup build cache for target directories/files
+ dump dump metadata into a JSON file
+ load load metadata from a previously dumped JSON file
+ config change config of a volume
+ destroy destroy an existing volume
+ help, h Shows a list of commands or help for one command
+
+GLOBAL OPTIONS:
+ --verbose, --debug, -v enable debug log (default: false)
+ --quiet, -q only warning and errors (default: false)
+ --trace enable trace log (default: false)
+ --no-agent Disable pprof (:6060) and gops (:6070) agent (default: false)
+ --help, -h show help (default: false)
+ --version, -V print only the version (default: false)
+
+COPYRIGHT:
+ Apache License 2.0
+```
+
+:::note
+If `juicefs` is not placed in your `$PATH`, you should run it with its path. For example, if `juicefs` is in the current directory, use `./juicefs`. It is recommended to place `juicefs` in your `$PATH` for convenience. You can refer to [Installation & Upgrade](../getting-started/installation.md) for more information.
+:::
+
+:::note
+If the command option is of boolean type, such as `--debug`, there is no need to set any value, just add `--debug` to the command to enable the function, and vice versa to disable it.
+:::
+
+## Auto Completion
+
+:::note
+This feature requires JuiceFS >= 0.15.2. It is implemented based on `github.com/urfave/cli/v2`. You can find more information [here](https://github.com/urfave/cli/blob/master/docs/v2/manual.md#enabling).
+:::
+
+To enable command completion, simply source the script provided under `hack/autocomplete`. For example:
+
+Bash:
+
+```bash
+source hack/autocomplete/bash_autocomplete
+```
+
+Zsh:
+
+```bash
+source hack/autocomplete/zsh_autocomplete
+```
+
+Please note the auto-completion is only enabled for the current session. If you want it for all new sessions, add the `source` command to `.bashrc` or `.zshrc`:
+
+```bash
+echo "source path/to/bash_autocomplete" >> ~/.bashrc
+```
+
+or
+
+```bash
+echo "source path/to/zsh_autocomplete" >> ~/.zshrc
+```
+
+Alternatively, if you are using bash on a Linux system, you may just copy the script to `/etc/bash_completion.d` and rename it to `juicefs`:
+
+```bash
+sudo cp hack/autocomplete/bash_autocomplete /etc/bash_completion.d/juicefs
+```
+
+```shell
+source /etc/bash_completion.d/juicefs
+```
+
+## Commands
+
+### juicefs format
+
+#### Description
+
+Format a volume. It's the first step for initializing a new file system volume.
+
+#### Synopsis
+
+```
+juicefs format [command options] META-URL NAME
+```
+
+- **META-URL**: Database URL for metadata storage, see "[JuiceFS supported metadata engines](how_to_setup_metadata_engine.md)" for details.
+- **NAME**: the name of the file system
+
+#### Options
+
+`--block-size value`
+size of block in KiB (default: 4096)
+
+`--capacity value`
+the limit for space in GiB (default: unlimited)
+
+`--inodes value`
+the limit for number of inodes (default: unlimited)
+
+`--compress value`
+compression algorithm (lz4, zstd, none) (default: "none")
+
+`--shards value`
+store the blocks into N buckets by hash of key (default: 0)
+
+`--storage value`
+Object storage type (e.g. s3, gcs, oss, cos) (default: "file")
+
+`--bucket value`
+A bucket URL to store data (default: `"$HOME/.juicefs/local"` or `"/var/jfs"`)
+
+`--access-key value`
+Access key for object storage (env `ACCESS_KEY`)
+
+`--secret-key value`
+Secret key for object storage (env `SECRET_KEY`)
+
+`--encrypt-rsa-key value`
+A path to RSA private key (PEM)
+
+`--trash-days value`
+number of days after which removed files will be permanently deleted (default: 1)
+
+`--force`
+overwrite existing format (default: false)
+
+`--no-update`
+don't update existing volume (default: false)
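+
+A minimal example that combines several of the options above; the bucket URL, credentials and Redis address are placeholders:
+
+```bash
+# Format a volume named "pics" backed by S3, with LZ4 compression and a 1 TiB quota
+juicefs format \
+    --storage s3 \
+    --bucket https://mybucket.s3.us-east-1.amazonaws.com \
+    --access-key ABCDEFGHIJKLMN \
+    --secret-key abcdefghijklmn1234567890 \
+    --compress lz4 \
+    --capacity 1024 \
+    redis://192.168.1.6:6379/1 \
+    pics
+```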
+
+### juicefs mount
+
+#### Description
+
+Mount a volume. The volume should be formatted first.
+
+#### Synopsis
+
+```
+juicefs mount [command options] META-URL MOUNTPOINT
+```
+
+- **META-URL**: Database URL for metadata storage, see "[JuiceFS supported metadata engines](how_to_setup_metadata_engine.md)" for details.
+- **MOUNTPOINT**: file system mount point, e.g. `/mnt/jfs`, `Z:`.
+
+#### Options
+
+`--metrics value`
+address to export metrics (default: "127.0.0.1:9567")
+
+`--consul value`
+consul address to register (default: "127.0.0.1:8500")
+
+`--no-usage-report`
+do not send usage report (default: false)
+
+`-d, --background`
+run in background (default: false)
+
+`--no-syslog`
+disable syslog (default: false)
+
+`--log value`
+path of log file when running in background (default: `$HOME/.juicefs/juicefs.log` or `/var/log/juicefs.log`)
+
+`-o value`
+other FUSE options (see [this document](../reference/fuse_mount_options.md) for more information)
+
+`--attr-cache value`
+attributes cache timeout in seconds (default: 1)
+
+`--entry-cache value`
+file entry cache timeout in seconds (default: 1)
+
+`--dir-entry-cache value`
+dir entry cache timeout in seconds (default: 1)
+
+`--enable-xattr`
+enable extended attributes (xattr) (default: false)
+
+`--bucket value`
+customized endpoint to access object store
+
+`--get-timeout value`
+the max number of seconds to download an object (default: 60)
+
+`--put-timeout value`
+the max number of seconds to upload an object (default: 60)
+
+`--io-retries value`
+number of retries after network failure (default: 30)
+
+`--max-uploads value`
+number of connections to upload (default: 20)
+
+`--max-deletes value`
+number of threads to delete objects (default: 2)
+
+`--buffer-size value`
+total read/write buffering in MiB (default: 300)
+
+`--upload-limit value`
+bandwidth limit for upload in Mbps (default: 0)
+
+`--download-limit value`
+bandwidth limit for download in Mbps (default: 0)
+
+`--prefetch value`
+prefetch N blocks in parallel (default: 1)
+
+`--writeback`
+upload objects in background (default: false)
+
+`--cache-dir value`
+directory paths of local cache, use colon to separate multiple paths (default: `"$HOME/.juicefs/cache"` or `"/var/jfsCache"`)
+
+`--cache-size value`
+size of cached objects in MiB (default: 102400)
+
+`--free-space-ratio value`
+min free space (ratio) (default: 0.1)
+
+`--cache-partial-only`
+cache only random/small read (default: false)
+
+`--read-only`
+allow lookup/read operations only (default: false)
+
+`--open-cache value`
+open file cache timeout in seconds (0 means disable this feature) (default: 0)
+
+`--subdir value`
+mount a sub-directory as root (default: "")
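+
+An illustrative mount command that uses a few of the options above; the cache directory, metadata URL and mount point are placeholders:
+
+```bash
+# Mount in the background with a 200 GiB cache on a local SSD and write-back enabled
+sudo juicefs mount -d \
+    --cache-dir /ssd/jfsCache \
+    --cache-size 204800 \
+    --writeback \
+    redis://192.168.1.6:6379/1 /mnt/jfs
+```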
+
+### juicefs umount
+
+#### Description
+
+Unmount a volume.
+
+#### Synopsis
+
+```
+juicefs umount [command options] MOUNTPOINT
+```
+
+#### Options
+
+`-f, --force`
+unmount a busy mount point by force (default: false)
+
+### juicefs gateway
+
+#### Description
+
+S3-compatible gateway.
+
+#### Synopsis
+
+```
+juicefs gateway [command options] META-URL ADDRESS
+```
+
+- **META-URL**: Database URL for metadata storage, see "[JuiceFS supported metadata engines](how_to_setup_metadata_engine.md)" for details.
+- **ADDRESS**: S3 gateway address and listening port, for example: `localhost:9000`
+
+#### Options
+
+`--bucket value`
+customized endpoint to access object store
+
+`--get-timeout value`
+the max number of seconds to download an object (default: 60)
+
+`--put-timeout value`
+the max number of seconds to upload an object (default: 60)
+
+`--io-retries value`
+number of retries after network failure (default: 30)
+
+`--max-uploads value`
+number of connections to upload (default: 20)
+
+`--max-deletes value`
+number of threads to delete objects (default: 2)
+
+`--buffer-size value`
+total read/write buffering in MiB (default: 300)
+
+`--upload-limit value`
+bandwidth limit for upload in Mbps (default: 0)
+
+`--download-limit value`
+bandwidth limit for download in Mbps (default: 0)
+
+`--prefetch value`
+prefetch N blocks in parallel (default: 1)
+
+`--writeback`
+upload objects in background (default: false)
+
+`--cache-dir value`
+directory paths of local cache, use colon to separate multiple paths (default: `"$HOME/.juicefs/cache"` or `/var/jfsCache`)
+
+`--cache-size value`
+size of cached objects in MiB (default: 102400)
+
+`--free-space-ratio value`
+min free space (ratio) (default: 0.1)
+
+`--cache-partial-only`
+cache only random/small read (default: false)
+
+`--read-only`
+allow lookup/read operations only (default: false)
+
+`--open-cache value`
+open file cache timeout in seconds (0 means disable this feature) (default: 0)
+
+`--subdir value`
+mount a sub-directory as root (default: "")
+
+`--attr-cache value`
+attributes cache timeout in seconds (default: 1)
+
+`--entry-cache value`
+file entry cache timeout in seconds (default: 0)
+
+`--dir-entry-cache value`
+dir entry cache timeout in seconds (default: 1)
+
+`--access-log value`
+path for JuiceFS access log
+
+`--metrics value`
+address to export metrics (default: "127.0.0.1:9567")
+
+`--no-usage-report`
+do not send usage report (default: false)
+
+`--no-banner`
+disable MinIO startup information (default: false)
+
+`--multi-buckets`
+use top level of directories as buckets (default: false)
+
+`--keep-etag`
+Save the ETag for uploaded objects (default: false)
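+
+A sketch of starting the gateway; the credential environment variables shown here are an assumption based on the MinIO-compatible gateway, see the [S3 Gateway](../deployment/s3_gateway.md) document for the authoritative names:
+
+```bash
+# Credentials for accessing the S3-compatible endpoint (illustrative names)
+export MINIO_ROOT_USER=admin
+export MINIO_ROOT_PASSWORD=12345678
+# Serve the volume on port 9000, exposing top-level directories as buckets
+juicefs gateway --multi-buckets redis://192.168.1.6:6379/1 localhost:9000
+```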
+
+
+### juicefs sync
+
+#### Description
+
+Sync files between two storage systems.
+
+#### Synopsis
+
+```
+juicefs sync [command options] SRC DST
+```
+
+- **SRC**: source path
+- **DST**: destination path
+
+The format of both the source and destination paths is `[NAME://][ACCESS_KEY:SECRET_KEY@]BUCKET[.ENDPOINT][/PREFIX]`, where:
+
+- `NAME`: JuiceFS supported data storage types (e.g. `s3`, `oss`), please refer to [document](how_to_setup_object_storage.md#supported-object-storage).
+- `ACCESS_KEY` and `SECRET_KEY`: The credential required to access the data storage, please refer to [document](how_to_setup_object_storage.md#access-key-and-secret-key).
+- `BUCKET[.ENDPOINT]`: The access address of the data storage service, the format may be different for different storage types, please refer to [document](how_to_setup_object_storage.md#supported-object-storage).
+- `[/PREFIX]`: Optional, a prefix for the source and destination paths that can be used to limit the synchronization to only data in certain paths.
+
+#### Options
+
+`--start KEY, -s KEY`
+the first KEY to sync
+
+`--end KEY, -e KEY`
+the last KEY to sync
+
+`--threads value, -p value`
+number of concurrent threads (default: 10)
+
+`--http-port PORT`
+HTTP PORT to listen to (default: 6070)
+
+`--update, -u`
+update existing file if the source is newer (default: false)
+
+`--force-update, -f`
+always update existing file (default: false)
+
+`--perms`
+preserve permissions (default: false)
+
+`--dirs`
+Sync directories or holders (default: false)
+
+`--dry`
+don't copy file (default: false)
+
+`--delete-src, --deleteSrc`
+delete objects from source after synced (default: false)
+
+`--delete-dst, --deleteDst`
+delete extraneous objects from destination (default: false)
+
+`--exclude PATTERN`
+exclude keys containing PATTERN (POSIX regular expressions)
+
+`--include PATTERN`
+only include keys containing PATTERN (POSIX regular expressions)
+
+`--manager value`
+manager address
+
+`--worker value`
+hosts (separated by comma) to launch worker
+
+`--bwlimit value`
+limit bandwidth in Mbps (0 means unlimited) (default: 0)
+
+`--no-https`
+do not use HTTPS (default: false)
+
+`--check-all`
+verify integrity of all files in source and destination (default: false)
+
+`--check-new`
+verify integrity of newly copied files (default: false)
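+
+For example, a sketch of copying one prefix from an S3 bucket into a mounted JuiceFS directory; the bucket name, endpoint and credentials are placeholders:
+
+```bash
+# Copy everything under "movies/" into a mounted JuiceFS volume,
+# only updating files whose source copy is newer, with 20 worker threads
+juicefs sync --update --threads 20 \
+    s3://ABCDEFG:HIJKLMN@mybucket.s3.us-east-1.amazonaws.com/movies/ \
+    /mnt/jfs/movies/
+```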
+
+### juicefs rmr
+
+#### Description
+
+Remove all files in directories recursively.
+
+#### Synopsis
+
+```
+juicefs rmr PATH ...
+```
+
+### juicefs info
+
+#### Description
+
+Show internal information for given paths or inodes.
+
+#### Synopsis
+
+```
+juicefs info [command options] PATH or INODE
+```
+
+#### Options
+
+`--inode, -i`
+use inode instead of path (current dir should be inside JuiceFS) (default: false)
+
+`--recursive, -r`
+get summary of directories recursively (NOTE: it may take a long time for huge trees) (default: false)
+
+### juicefs bench
+
+#### Description
+
+Run benchmark, including read/write/stat of big and small files.
+
+#### Synopsis
+
+```
+juicefs bench [command options] PATH
+```
+
+#### Options
+
+`--block-size value`
+block size in MiB (default: 1)
+
+`--big-file-size value`
+size of big file in MiB (default: 1024)
+
+`--small-file-size value`
+size of small file in MiB (default: 0.1)
+
+`--small-file-count value`
+number of small files (default: 100)
+
+`--threads value, -p value`
+number of concurrent threads (default: 1)
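+
+For instance, to benchmark a mount point with 4 concurrent threads and the default file sizes (the path is a placeholder):
+
+```bash
+juicefs bench -p 4 /mnt/jfs
+```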
+
+### juicefs gc
+
+#### Description
+
+Collect any leaked objects.
+
+#### Synopsis
+
+```
+juicefs gc [command options] META-URL
+```
+
+#### Options
+
+`--delete`
+delete leaked objects (default: false)
+
+`--compact`
+compact all chunks with more than 1 slice (default: false)
+
+`--threads value`
+number of threads to delete leaked objects (default: 10)
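+
+A typical workflow is to scan first and delete only after reviewing the result; the metadata URL is a placeholder:
+
+```bash
+# Scan for leaked objects without touching anything
+juicefs gc redis://192.168.1.6:6379/1
+# Then delete the leaked objects found by the scan
+juicefs gc --delete redis://192.168.1.6:6379/1
+```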
+
+### juicefs fsck
+
+#### Description
+
+Check consistency of file system.
+
+#### Synopsis
+
+```
+juicefs fsck [command options] META-URL
+```
+
+### juicefs profile
+
+#### Description
+
+Analyze [access log](../administration/fault_diagnosis_and_analysis.md#access-log).
+
+#### Synopsis
+
+```
+juicefs profile [command options] MOUNTPOINT/LOGFILE
+```
+
+#### Options
+
+`--uid value, -u value`
+track only specified UIDs (separated by comma)
+
+`--gid value, -g value`
+track only specified GIDs (separated by comma)
+
+`--pid value, -p value`
+track only specified PIDs (separated by comma)
+
+`--interval value`
+flush interval in seconds; set it to 0 when replaying a log file to get an immediate result (default: 2)
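+
+For example, to watch a live mount point, or to replay a previously collected access log for an immediate aggregated result (the log path is a placeholder):
+
+```bash
+# Real-time analysis of a mounted file system
+juicefs profile /mnt/jfs
+# Replay a saved access log and print the aggregated result immediately
+juicefs profile --interval 0 /tmp/juicefs.accesslog
+```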
+
+### juicefs stats
+
+#### Description
+
+Show runtime statistics
+
+#### Synopsis
+
+```
+juicefs stats [command options] MOUNTPOINT
+```
+
+#### Options
+
+`--schema value`
+schema string that controls the output sections (u: usage, f: fuse, m: meta, c: blockcache, o: object, g: go) (default: "ufmco")
+
+`--interval value`
+interval in seconds between each update (default: 1)
+
+`--verbosity value`
+verbosity level, 0 or 1 is enough for most cases (default: 0)
+
+`--nocolor`
+disable colors (default: false)
+
+### juicefs status
+
+#### Description
+
+Show status of JuiceFS
+
+#### Synopsis
+
+```
+juicefs status [command options] META-URL
+```
+
+#### Options
+
+`--session value, -s value`
+show detailed information (sustained inodes, locks) of the specified session (sid) (default: 0)
+
+### juicefs warmup
+
+#### Description
+
+Build cache for target directories/files
+
+#### Synopsis
+
+```
+juicefs warmup [command options] [PATH ...]
+```
+
+#### Options
+
+`--file value, -f value`
+file containing a list of paths
+
+`--threads value, -p value`
+number of concurrent workers (default: 50)
+
+`--background, -b`
+run in background (default: false)
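+
+For example, to pre-populate the local cache for a dataset directory with 10 concurrent workers (the path is a placeholder):
+
+```bash
+juicefs warmup -p 10 /mnt/jfs/dataset
+```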
+
+### juicefs dump
+
+#### Description
+
+Dump metadata into a JSON file
+
+#### Synopsis
+
+```
+juicefs dump [command options] META-URL [FILE]
+```
+
+When the FILE is not provided, STDOUT will be used instead.
+
+#### Options
+
+`--subdir value`
+only dump a sub-directory.
+
+### juicefs load
+
+#### Description
+
+Load metadata from a previously dumped JSON file
+
+#### Synopsis
+
+```
+juicefs load [command options] META-URL [FILE]
+```
+
+When the FILE is not provided, STDIN will be used instead.
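+
+Together, `dump` and `load` provide a simple way to back up metadata or move it between engines. A sketch, assuming both metadata URLs are placeholders and the target engine is empty before loading:
+
+```bash
+# Export all metadata of a Redis-backed volume into a JSON file
+juicefs dump redis://192.168.1.6:6379/1 meta-dump.json
+# Import the metadata into an empty metadata engine, e.g. another Redis database
+juicefs load redis://192.168.1.6:6379/2 meta-dump.json
+```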
+
+### juicefs config
+
+#### Description
+
+Change config of a volume
+
+#### Synopsis
+
+```
+juicefs config [command options] META-URL
+```
+
+#### Options
+
+`--capacity value`
+the limit for space in GiB
+
+`--inodes value`
+the limit for number of inodes
+
+`--bucket value`
+a bucket URL to store data
+
+`--access-key value`
+access key for object storage
+
+`--secret-key value`
+secret key for object storage
+
+`--trash-days value`
+number of days after which removed files will be permanently deleted
+
+`--force`
+skip sanity check and force update the configurations (default: false)
+
+### juicefs destroy
+
+#### Description
+
+Destroy an existing volume
+
+#### Synopsis
+
+```
+juicefs destroy [command options] META-URL UUID
+```
+
+#### Options
+
+`--force`
+skip sanity check and force destroy the volume (default: false)
diff --git a/docs/en/reference/fuse_mount_options.md b/docs/en/reference/fuse_mount_options.md
new file mode 100644
index 0000000..a6fc2b5
--- /dev/null
+++ b/docs/en/reference/fuse_mount_options.md
@@ -0,0 +1,26 @@
+---
+sidebar_label: FUSE Mount Options
+sidebar_position: 6
+slug: /fuse_mount_options
+---
+# FUSE Mount Options
+
+This guide lists important FUSE mount options. These mount options are specified with the `-o` option when executing the [`juicefs mount`](../reference/command_reference.md#juicefs-mount) command (use commas to separate multiple options). For example:
+
+```bash
+$ juicefs mount -d -o allow_other,writeback_cache localhost ~/jfs
+```
+
+## debug
+
+Enable debug log
+
+## allow_other
+
+This option overrides the security measure that restricts file access to the user mounting the file system, so that all users (including root) can access the files. By default this option is only allowed for root, but this restriction can be removed with the `user_allow_other` option in `/etc/fuse.conf`.
+
+## writeback_cache
+
+> **Note**: This mount option requires Linux kernel 3.15 or later.
+
+FUSE supports ["writeback-cache mode"](https://www.kernel.org/doc/Documentation/filesystems/fuse-io.txt), which means the `write()` syscall can often complete very quickly. It is recommended to enable this mount option when you frequently write very small amounts of data (e.g. 100 bytes).
diff --git a/docs/en/reference/glossary.md b/docs/en/reference/glossary.md
new file mode 100644
index 0000000..740a55e
--- /dev/null
+++ b/docs/en/reference/glossary.md
@@ -0,0 +1,4 @@
+# Glossary
+
+:::note
+Work in progress.
\ No newline at end of file
diff --git a/docs/en/reference/how_juicefs_store_files.md b/docs/en/reference/how_juicefs_store_files.md
new file mode 100644
index 0000000..9ad3b17
--- /dev/null
+++ b/docs/en/reference/how_juicefs_store_files.md
@@ -0,0 +1,20 @@
+---
+sidebar_label: How JuiceFS Stores Files
+sidebar_position: 5
+slug: /how_juicefs_store_files
+---
+# How JuiceFS Stores Files
+
+A file system acts as the medium between the user and the hard drive, allowing files to be stored on the drive properly. Windows commonly uses file systems such as FAT32 and NTFS, while Linux commonly uses Ext4, XFS, Btrfs, etc. Each file system has its own way of organizing and managing files, which determines characteristics such as its storage capacity and performance.
+
+As a file system, JuiceFS is no exception. Its strong consistency and high performance are inseparable from its unique file management mode.
+
+Unlike traditional file systems, which can only use local disks to store data and the corresponding metadata, JuiceFS formats the data and stores it in object storage (cloud storage), while storing the metadata corresponding to the data in databases such as Redis.
+
+Any file stored in JuiceFS is split into fixed-size **"Chunks"**, with a default upper limit of 64 MiB. Each Chunk is composed of one or more **"Slices"**; the length of a Slice is not fixed and depends on how the file is written. Each Slice is further split into fixed-size **"Blocks"**, 4 MiB by default. Finally, these Blocks are stored in the object storage, while JuiceFS stores each file and its Chunks, Slices, Blocks and other metadata in the metadata engine.
+
+![](../images/juicefs-storage-format-new.png)
+
+With JuiceFS, files are eventually split into Chunks, Slices and Blocks and stored in object storage. Therefore, the source files stored in JuiceFS cannot be found in the file browser of the object storage platform; instead, the bucket contains a `chunks` directory and a set of numbered directories and files. Don't panic, this is exactly the secret behind the high performance of the JuiceFS file system!
+
+![How JuiceFS stores your files](../images/how-juicefs-stores-files-new.png)
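+
+As a rough illustration of the numbers above (a sketch assuming the default 64 MiB chunk size and 4 MiB block size, with the volume mounted at `/jfs`), a 100 MiB file occupies ceil(100/64) = 2 Chunks and ceil(100/4) = 25 object-storage Blocks, which can be inspected with `juicefs info`:
+
+```bash
+# Write a 100 MiB file into the mounted JuiceFS volume
+dd if=/dev/zero of=/jfs/bigfile bs=1M count=100
+# Show how JuiceFS split this file into chunks, slices and objects
+juicefs info /jfs/bigfile
+```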
diff --git a/docs/en/reference/how_to_setup_metadata_engine.md b/docs/en/reference/how_to_setup_metadata_engine.md
new file mode 100644
index 0000000..b75c46f
--- /dev/null
+++ b/docs/en/reference/how_to_setup_metadata_engine.md
@@ -0,0 +1,244 @@
+---
+sidebar_label: How to Setup Metadata Engine
+sidebar_position: 3
+slug: /databases_for_metadata
+---
+# How to Setup Metadata Engine
+
+By reading [JuiceFS Technical Architecture](../introduction/architecture.md) and [How JuiceFS Stores Files](../reference/how_juicefs_store_files.md), you will understand that JuiceFS is designed to store data and metadata independently. Generally, the data is stored in object-based cloud storage, and the metadata corresponding to the data is stored in an independent database.
+
+## Metadata Storage Engine
+
+Metadata and data are equally important. The metadata records the detailed information of each file, such as the name, size, permissions, location, and so on. Especially for this kind of file system where data and metadata are stored separately, the read and write performance of metadata directly determines the actual performance of the file system.
+
+The metadata storage of JuiceFS uses a multi-engine design. To build an ultra-high-performance cloud-native file system, JuiceFS first supported [Redis](https://redis.io), a key-value database running in memory, which gives JuiceFS roughly ten times the performance of Amazon [EFS](https://aws.amazon.com/efs) and [S3FS](https://github.com/s3fs-fuse/s3fs-fuse) ([view test results](../benchmark/benchmark.md)).
+
+Through active interaction with community users, we found that many application scenarios do not strictly depend on high performance. Sometimes users just want a convenient tool to reliably migrate data on the cloud, or simply want to mount object storage locally for small-scale use. Therefore, JuiceFS has gradually added support for more databases such as MySQL/MariaDB and SQLite (some performance comparisons are recorded [here](../benchmark/metadata_engines_benchmark.md)).
+
+**Please pay special attention**: no matter which database you choose to store metadata, you must ensure the security of that metadata. If the metadata is damaged or lost, the corresponding data will be completely damaged or lost, and in serious cases the entire file system may be damaged.
+
+:::caution
+No matter which database is used to store metadata, **it is important to ensure the security of the metadata**. If the metadata is corrupted or lost, the corresponding data will be completely corrupted or lost, and the whole file system may even be damaged. For production environments, you should always choose a database with high availability, and it is also recommended to "[backup metadata](../administration/metadata_dump_load.md)" on a regular basis.
+:::
+
+## Redis
+
+[Redis](https://redis.io/) is an open source (BSD license) memory-based key-value storage system, often used as a database, cache, and message broker.
+
+### Create a file system
+
+When using Redis as the metadata storage engine, the following format is usually used to access the database:
+
+```shell
+redis://username:password@host:6379/1
+```
+
+The `username` field was introduced in Redis 6.0. If you don't have a username, you can omit it, e.g. `redis://:password@host:6379/1` (the colon `:` in front of the password must be kept).
+
+For example, the following command will create a JuiceFS file system named `pics`, using the database No. `1` in Redis to store metadata:
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "redis://:mypassword@192.168.1.6:6379/1" \
+ pics
+```
+
+For security purposes, it is recommended to pass the password using the environment variable `REDIS_PASSWORD`, e.g.
+
+```shell
+export REDIS_PASSWORD=mypassword
+```
+
+Then there is no need to set a password in the metadata URL.
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "redis://192.168.1.6:6379/1" \
+ pics
+```
+
+:::caution
+JuiceFS requires Redis 4.0 or later.
+:::
+
+### Mount a file system
+
+```shell
+sudo juicefs mount -d redis://192.168.1.6:6379/1 /mnt/jfs
+```
+
+:::tip
+If you need to share the same file system on multiple servers, you must ensure that each server has access to the database where the metadata is stored.
+:::
+
+If you maintain your own Redis database, we recommend reading [Redis Best Practices](../administration/metadata/redis_best_practices.md).
+
+## PostgreSQL
+
+[PostgreSQL](https://www.postgresql.org/) is a powerful open source relational database with a perfect ecosystem and rich application scenarios, and it is well suited as the metadata engine of JuiceFS.
+
+Many cloud computing platforms offer hosted PostgreSQL database services, or you can deploy one yourself by following the [Usage Wizard](https://www.postgresqltutorial.com/postgresql-getting-started/).
+
+Other PostgreSQL-compatible databases (such as CockroachDB) can also be used as the metadata engine.
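+
+For instance, since CockroachDB speaks the PostgreSQL wire protocol (listening on port 26257 by default), a format command can be expected to look roughly like the following; the host, credentials and database name are placeholders, and this is a sketch rather than a verified configuration:
+
+```shell
+$ juicefs format --storage s3 \
+    ...
+    "postgres://user:password@192.168.1.6:26257/juicefs" \
+    pics
+```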
+
+### Create a file system
+
+When using PostgreSQL as the metadata storage engine, the following format is usually used to access the database:
+
+```shell
+postgres://[<username>:<password>@]<host>[:5432]/<database-name>[?parameters]
+```
+
+For example:
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "postgres://user:password@192.168.1.6:5432/juicefs" \
+ pics
+```
+
+For security purposes, it is recommended to pass the password using an environment variable, e.g.
+
+```shell
+export PG_PASSWD=mypassword
+```
+
+Then change the metadata URL to `"postgres://user:$PG_PASSWD@192.168.1.6:5432/juicefs"`
+
+### Mount a file system
+
+```shell
+sudo juicefs mount -d "postgres://user:$PG_PASSWD@192.168.1.6:5432/juicefs" /mnt/jfs
+```
+
+### Troubleshooting
+
+The JuiceFS client connects to PostgreSQL via SSL encryption by default. If you receive the error `pq: SSL is not enabled on the server`, it means SSL is not enabled on the database. You can either enable SSL encryption for PostgreSQL according to your business scenario, or disable SSL validation by adding a parameter to the metadata URL:
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "postgres://user:$PG_PASSWD@192.168.1.6:5432/juicefs?sslmode=disable" \
+ pics
+```
+
+Additional parameters can be appended to the metadata URL, [click here to view](https://pkg.go.dev/github.com/lib/pq#hdr-Connection_String_Parameters).
+
+## MySQL
+
+[MySQL](https://www.mysql.com/) is one of the most popular open source relational databases, and is often used as the preferred database for Web applications.
+
+### Create a file system
+
+When using MySQL as the metadata storage engine, the following format is usually used to access the database:
+
+```shell
+mysql://<username>:<password>@(<host>:3306)/<database-name>
+```
+
+For example:
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "mysql://user:password@(192.168.1.6:3306)/juicefs" \
+ pics
+```
+
+For security purposes, it is recommended to pass the password using an environment variable, e.g.
+
+```shell
+export MYSQL_PASSWD=mypassword
+```
+
+Then change the metadata URL to `"mysql://user:$MYSQL_PASSWD@(192.168.1.6:3306)/juicefs"`
+
+### Mount a file system
+
+```shell
+sudo juicefs mount -d "mysql://user:$MYSQL_PASSWD@(192.168.1.6:3306)/juicefs" /mnt/jfs
+```
+
+For more examples of MySQL database address format, [click here to view](https://github.com/Go-SQL-Driver/MySQL/#examples).
+
+## MariaDB
+
+[MariaDB](https://mariadb.org) is an open source branch of MySQL, maintained by the original developers of MySQL and kept open source.
+
+Because MariaDB is highly compatible with MySQL, it is used in exactly the same way: the parameters and settings are identical.
+
+For example:
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "mysql://user:$MYSQL_PASSWD@(192.168.1.6:3306)/juicefs" \
+ pics
+```
+
+## SQLite
+
+[SQLite](https://sqlite.org) is a widely used small, fast, single-file, reliable, and full-featured SQL database engine.
+
+The SQLite database has only one file, which is very flexible to create and use. When using it as the JuiceFS metadata storage engine, there is no need to create a database file in advance, and you can directly create a file system:
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "sqlite3://my-jfs.db" \
+ pics
+```
+
+Executing the above command will automatically create a database file named `my-jfs.db` in the current directory. **Please take good care of this file!**
+
+Mount the file system:
+
+```shell
+sudo juicefs mount -d "sqlite3://my-jfs.db"
+```
+
+Please note the location of the database file. If it is not in the current directory, you need to specify its absolute path, e.g.
+
+```shell
+sudo juicefs mount -d "sqlite3:///home/herald/my-jfs.db" /mnt/jfs/
+```
+
+:::note
+Since SQLite is a single-file database, usually only the host where it is located can access it. Therefore, SQLite is more suitable for standalone use. To share the same file system across multiple servers, a database such as Redis or MySQL is recommended.
+:::
+
+## TiKV
+
+[TiKV](https://github.com/tikv/tikv) is a distributed transactional key-value database. It was originally developed by [PingCAP](https://pingcap.com) as the storage layer for their flagship product [TiDB](https://github.com/pingcap/tidb). TiKV is now an independent open source project, and is also a graduated project of the [CNCF](https://www.cncf.io/projects).
+
+With the help of the official tool `TiUP`, you can easily build a local playground for testing; refer [here](https://tikv.org/docs/5.1/concepts/tikv-in-5-minutes/) for details. In production, at least three hosts are usually required to store three data replicas; refer to the [official document](https://tikv.org/docs/5.1/deploy/install/install/) for all steps.
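+
+As a quick, non-production sketch (assuming a recent TiUP release that provides the `tikv-slim` playground mode), a local test cluster can be started like this:
+
+```shell
+# Install TiUP, then start a minimal local PD + TiKV playground for testing only
+curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
+tiup playground --mode tikv-slim
+```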
+
+### Create a file system
+
+When using TiKV as the metadata storage engine, specify the parameters in the following format:
+
+```shell
+tikv://<pd_addr>[,<pd_addr>...]/<prefix>
+```
+
+`<prefix>` is a user-defined string, which can be used to distinguish multiple file systems or applications sharing the same TiKV cluster. For example:
+
+```shell
+$ juicefs format --storage s3 \
+ ...
+ "tikv://192.168.1.6:2379,192.168.1.7:2379,192.168.1.8:2379/jfs" \
+ pics
+```
+
+### Mount a file system
+
+```shell
+sudo juicefs mount -d "tikv://192.168.1.6:6379,192.168.1.7:6379,192.168.1.8:6379/jfs" /mnt/jfs
+```
+
+## FoundationDB
+
+Coming soon...
diff --git a/docs/en/reference/how_to_setup_object_storage.md b/docs/en/reference/how_to_setup_object_storage.md
new file mode 100644
index 0000000..054264e
--- /dev/null
+++ b/docs/en/reference/how_to_setup_object_storage.md
@@ -0,0 +1,780 @@
+---
+sidebar_label: How to Setup Object Storage
+sidebar_position: 4
+slug: /how_to_setup_object_storage
+---
+
+# How to Setup Object Storage
+
+As you can learn from [JuiceFS Technical Architecture](../introduction/architecture.md), JuiceFS is a distributed file system with data and metadata separation, using object storage as the main data storage and Redis, PostgreSQL, MySQL and other databases as metadata storage.
+
+## Storage options
+
+When creating a JuiceFS file system, setting up the storage generally involves the following options:
+
+- `--storage`: Specify the type of storage to be used by the file system, e.g. `--storage s3`
+- `--bucket`: Specify the storage access address, e.g. `--bucket https://myjuicefs.s3.us-east-2.amazonaws.com`
+- `--access-key` and `--secret-key`: Specify the authentication information when accessing the storage
+
+For example, the following command uses Amazon S3 object storage to create a file system:
+
+```shell
+$ juicefs format --storage s3 \
+ --bucket https://myjuicefs.s3.us-east-2.amazonaws.com \
+ --access-key abcdefghijklmn \
+ --secret-key nmlkjihgfedAcBdEfg \
+ redis://192.168.1.6/1 \
+ myjfs
+```
+
+## Access Key and Secret Key
+
+In general, object storage services are authenticated with an `Access Key ID` and an `Access Key Secret`, which correspond to the `--access-key` and `--secret-key` options (AK and SK for short) when creating a JuiceFS file system.
+
+In addition to explicitly specifying the `--access-key` and `--secret-key` options when creating a filesystem, it is more secure to pass key information via the `ACCESS_KEY` and `SECRET_KEY` environment variables, e.g.
+
+```shell
+$ export ACCESS_KEY=abcdefghijklmn
+$ export SECRET_KEY=nmlkjihgfedAcBdEfg
+$ juicefs format --storage s3 \
+ --bucket https://myjuicefs.s3.us-east-2.amazonaws.com \
+ redis://192.168.1.6/1 \
+ myjfs
+```
+
+Public clouds typically allow users to create IAM (Identity and Access Management) roles, such as [AWS IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) or [Alibaba Cloud RAM role](https://www.alibabacloud.com/help/doc-detail/110376.htm), which can be assigned to VM instances. If the cloud server instance already has read and write access to the object storage, there is no need to specify `--access-key` and `--secret-key`.
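+
+For instance, on an AWS EC2 instance whose attached IAM role already grants read and write access to the bucket, the format command can simply omit the keys; the bucket address and metadata URL below are placeholders:
+
+```shell
+# No --access-key / --secret-key needed when the instance role grants S3 access
+$ juicefs format --storage s3 \
+    --bucket https://<bucket>.s3.<region>.amazonaws.com \
+    redis://192.168.1.6/1 \
+    myjfs
+```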
+
+## Using Proxy
+
+If the client's network is restricted by firewall policies or other factors and external object storage services must be accessed through a proxy, the proxy settings differ between operating systems; please refer to the corresponding user manual.
+
+On Linux, for example, the proxy can be set via the `http_proxy` and `https_proxy` environment variables:
+
+```shell
+$ export http_proxy=http://localhost:8035/
+$ export https_proxy=http://localhost:8035/
+$ juicefs format \
+ --storage s3 \
+ ... \
+ myjfs
+```
+
+## Supported Object Storage
+
+If you wish to use a storage type that is not listed, feel free to submit a requirement [issue](https://github.com/juicedata/juicefs/issues).
+
+| Name | `--storage` value |
+| --------------------------------------------------------- | ---------- |
+| [Amazon S3](#amazon-s3) | `s3` |
+| [Google Cloud Storage](#google-cloud-storage) | `gs` |
+| [Azure Blob Storage](#azure-blob-storage) | `wasb` |
+| [Backblaze B2](#backblaze-b2) | `b2` |
+| [IBM Cloud Object Storage](#ibm-cloud-object-storage) | `ibmcos` |
+| [Scaleway Object Storage](#scaleway-object-storage) | `scw` |
+| [DigitalOcean Spaces](#digitalocean-spaces) | `space` |
+| [Wasabi](#wasabi) | `wasabi` |
+| [Storj DCS](#storj-dcs) | `s3` |
+| [Vultr Object Storage](#vultr-object-storage) | `s3` |
+| [Alibaba Cloud OSS](#alibaba-cloud-oss) | `oss` |
+| [Tencent Cloud COS](#tencent-cloud-cos) | `cos` |
+| [Huawei Cloud OBS](#huawei-cloud-obs) | `obs` |
+| [Baidu Object Storage](#baidu-object-storage) | `bos` |
+| [Kingsoft KS3](#kingsoft-cloud-ks3) | `ks3` |
+| [NetEase Object Storage](#netease-object-storage) | `nos` |
+| [QingStor](#qingstor) | `qingstor` |
+| [Qiniu Object Storage](#qiniu) | `qiniu` |
+| [Sina Cloud Storage](#sina-cloud-storage) | `scs` |
+| [CTYun OOS](#ctyun-oos) | `oos` |
+| [ECloud Object Storage](#ecloud-object-storage) | `eos` |
+| [UCloud US3](#ucloud-us3) | `ufile` |
+| [Ceph RADOS](#ceph-rados) | `ceph` |
+| [Ceph RGW](#ceph-rgw) | `s3` |
+| [Swift](#swift) | `swift` |
+| [MinIO](#minio) | `minio` |
+| [WebDAV](#webdav) | `webdav` |
+| [HDFS](#hdfs) | `hdfs` |
+| [Redis](#redis) | `redis` |
+| [TiKV](#tikv) | `tikv` |
+| [Local disk](#local-disk) | `file` |
+
+## Amazon S3
+
+S3 supports [two styles of endpoint URI](https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html): virtual hosted-style and path-style. The difference between them is:
+
+- Virtual hosted-style: `https://<bucket>.s3.<region>.amazonaws.com`
+- Path-style: `https://s3.<region>.amazonaws.com/<bucket>`
+
+The `<region>` should be replaced with the specific region code, e.g. the region code of US East (N. Virginia) is `us-east-1`. You can find all available regions [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions).
+
+:::note
+For AWS China users, you need to add `.cn` to the host, i.e. `amazonaws.com.cn`, and check [this document](https://docs.amazonaws.cn/en_us/aws/latest/userguide/endpoints-arns.html) to find your region code.
+:::
+
+:::note
+If the S3 bucket has public access (anonymous access is supported), please set `--access-key` to `anonymous`.
+:::
+
+Versions prior to JuiceFS v0.12 only supported the virtual hosted-style; v0.12 and later versions support both styles. For example:
+
+```bash
+# virtual hosted-style
+$ juicefs format \
+ --storage s3 \
+    --bucket https://<bucket>.s3.<region>.amazonaws.com \
+ ... \
+ myjfs
+```
+
+```bash
+# path-style
+$ juicefs format \
+ --storage s3 \
+    --bucket https://s3.<region>.amazonaws.com/<bucket> \
+ ... \
+ myjfs
+```
+
+You can also set `--storage` to `s3` to connect to S3-compatible object storage, e.g.
+
+```bash
+# virtual hosted-style
+$ juicefs format \
+ --storage s3 \
+    --bucket https://<bucket>.<endpoint> \
+ ... \
+ myjfs
+```
+
+```bash
+# path-style
+$ juicefs format \
+ --storage s3 \
+    --bucket https://<endpoint>/<bucket> \
+ ... \
+ myjfs
+```
+
+:::tip
+The format of the `--bucket` option for all S3-compatible object storage services is `https://<bucket>.<endpoint>` or `https://<endpoint>/<bucket>`. The default `region` is `us-east-1`. When a different `region` is required, it can be set manually via the environment variable `AWS_REGION` or `AWS_DEFAULT_REGION`.
+:::
+
+## Google Cloud Storage
+
+Google Cloud uses [IAM](https://cloud.google.com/iam/docs/overview) to manage access to resources. By authorizing [service accounts](https://cloud.google.com/iam/docs/creating-managing-service-accounts#iam-service-accounts-create-gcloud), you can fine-tune the access rights of cloud servers and object storage.
+
+For cloud servers and object storage that belong to the same service account, as long as the account grants access to the relevant resources, there is no need to provide authentication information when creating a JuiceFS file system, and the cloud platform will automatically complete authentication.
+
+If you want to access the object storage from outside the Google Cloud Platform, for example to create a JuiceFS file system on your local computer using Google Cloud Storage, you need to configure authentication information. Google Cloud Storage does not use an `Access Key ID` and `Access Key Secret`; instead, it authenticates with the service account's `JSON key file`.
+
+Please refer to "[Authentication as a service account](https://cloud.google.com/docs/authentication/production)" to create `JSON key file` for the service account and download it to the local computer, and define the path to the key file via `GOOGLE_APPLICATION_ CREDENTIALS` environment variable to define the path to the key file, e.g.
+
+```shell
+export GOOGLE_APPLICATION_CREDENTIALS="$HOME/service-account-file.json"
+```
+
+You can add the command that sets the environment variable to `~/.bashrc` or `~/.profile` so that the shell sets it automatically every time it starts.
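+
+For example (the key file path is a placeholder), appending the export to `~/.bashrc`:
+
+```shell
+echo 'export GOOGLE_APPLICATION_CREDENTIALS="$HOME/service-account-file.json"' >> ~/.bashrc
+source ~/.bashrc
+```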
+
+Once you have configured the environment variable that passes the key information, the commands to create a file system locally and on a Google Cloud server are identical. For example:
+
+```bash
+$ juicefs format \
+ --storage gs \
+    --bucket <bucket> \
+ ... \
+ myjfs
+```
+
+As you can see, there is no need to include authentication information in the command, and the client will authenticate the access to the object storage through the JSON key file set in the previous environment variable. Also, since the bucket name is [globally unique](https://cloud.google.com/storage/docs/naming-buckets#considerations), when creating a file system, the `--bucket` option only needs to specify the bucket name.
+
+## Azure Blob Storage
+
+Besides providing authorization information through the `--access-key` and `--secret-key` options, you can also create a [connection string](https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string) and set the `AZURE_STORAGE_CONNECTION_STRING` environment variable. For example:
+
+```bash
+# Use connection string
+$ export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=XXX;AccountKey=XXX;EndpointSuffix=core.windows.net"
+$ juicefs format \
+ --storage wasb \
+    --bucket https://<container> \
+ ... \
+ myjfs
+```
+
+:::note
+For Azure China users, the value of `EndpointSuffix` is `core.chinacloudapi.cn`.
+:::
+
+## Backblaze B2
+
+To use Backblaze B2 as data storage for JuiceFS, you need to create an [application key](https://www.backblaze.com/b2/docs/application_keys.html) first. The **Application Key ID** and **Application Key** correspond to the `Access key` and `Secret key` respectively.
+
+Backblaze B2 supports two access interfaces: the B2 native API and the S3-compatible API.
+
+### B2 native API
+
+The storage type should be set to `b2` and `--bucket` should only set the bucket name. For example:
+
+```bash
+$ juicefs format \
+ --storage b2 \
+    --bucket <bucket> \
+    --access-key <application-key-ID> \
+    --secret-key <application-key> \
+ ... \
+ myjfs
+```
+
+### S3-compatible API
+
+The storage type should be set to `s3` and `--bucket` should specify the full bucket address. For example:
+
+```bash
+$ juicefs format \
+ --storage s3 \
+    --bucket https://s3.eu-central-003.backblazeb2.com/<bucket> \
+    --access-key <application-key-ID> \
+    --secret-key <application-key> \
+ ... \
+ myjfs
+```
+
+## IBM Cloud Object Storage
+
+You first need to create an [API key](https://cloud.ibm.com/docs/account?topic=account-manapikey) and retrieve the [instance ID](https://cloud.ibm.com/docs/key-protect?topic=key-protect-retrieve-instance-ID). The "API key" and "instance ID" are the equivalents of the access key and secret key respectively.
+
+IBM Cloud Object Storage provides [multiple endpoints](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints) for each region; depending on your network (e.g. public or private network), you should use the appropriate endpoint. For example:
+
+```bash
+$ juicefs format \
+ --storage ibmcos \
+    --bucket https://<bucket>.<endpoint> \
+    --access-key <API-key> \
+    --secret-key <instance-ID> \
+ ... \
+ myjfs
+```
+
+## Scaleway Object Storage
+
+Please follow [this document](https://www.scaleway.com/en/docs/generate-api-keys) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.s3.<region>.scw.cloud`. Replace `<region>` with the specific region code, e.g. the region code of "Amsterdam, The Netherlands" is `nl-ams`. You can find all available regions [here](https://www.scaleway.com/en/docs/object-storage-feature/#-Core-Concepts). For example:
+
+```bash
+$ juicefs format \
+ --storage scw \
+    --bucket https://<bucket>.s3.<region>.scw.cloud \
+ ... \
+ myjfs
+```
+
+## DigitalOcean Spaces
+
+Please follow [this document](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.<region>.digitaloceanspaces.com`. Replace `<region>` with the specific region code, e.g. `nyc3`. You can find all available regions [here](https://www.digitalocean.com/docs/spaces/#regional-availability). For example:
+
+```bash
+$ juicefs format \
+ --storage space \
+    --bucket https://<bucket>.<region>.digitaloceanspaces.com \
+ ... \
+ myjfs
+```
+
+## Wasabi
+
+Please follow [this document](https://wasabi-support.zendesk.com/hc/en-us/articles/360019677192-Creating-a-Root-Access-Key-and-Secret-Key) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.s3.<region>.wasabisys.com`. Replace `<region>` with the specific region code, e.g. the region code of US East 1 (N. Virginia) is `us-east-1`. You can find all available regions [here](https://wasabi-support.zendesk.com/hc/en-us/articles/360.15.26031-What-are-the-service-URLs-for-Wasabi-s-different-regions-). For example:
+
+```bash
+$ juicefs format \
+ --storage wasabi \
+    --bucket https://<bucket>.s3.<region>.wasabisys.com \
+ ... \
+ myjfs
+```
+
+:::note
+For users in the Tokyo (ap-northeast-1) region, see [this document](https://wasabi-support.zendesk.com/hc/en-us/articles/360039372392-How-do-I-access-the-Wasabi-Tokyo-ap-northeast-1-storage-region-) to learn how to get the appropriate endpoint URI.
+:::
+
+## Storj DCS
+
+Please refer to [this document](https://docs.storj.io/api-reference/s3-compatible-gateway) to learn how to create access key and secret key.
+
+Storj DCS is S3-compatible, so just use `s3` for the `--storage` option. The `--bucket` option format is `https://gateway.<region>.storjshare.io/<bucket>`; replace `<region>` with the storage region you actually use. There are currently three available regions: `us1`, `ap1` and `eu1`. For example:
+
+```shell
+$ juicefs format \
+ --storage s3 \
+    --bucket https://gateway.<region>.storjshare.io/<bucket> \
+    --access-key <your-access-key> \
+    --secret-key <your-secret-key> \
+ ... \
+ myjfs
+```
+
+## Vultr Object Storage
+
+Vultr Object Storage is S3-compatible, so use `s3` for the `--storage` option. The `--bucket` option format is `https://<bucket>.<region>.vultrobjects.com/`. For example:
+
+```shell
+$ juicefs format \
+ --storage s3 \
+    --bucket https://<bucket>.ewr1.vultrobjects.com/ \
+    --access-key <your-access-key> \
+    --secret-key <your-secret-key> \
+ ... \
+ myjfs
+```
+
+Please find the access and secret keys for object storage [in the customer portal](https://my.vultr.com/objectstorage/).
+
+## Alibaba Cloud OSS
+
+Please follow [this document](https://www.alibabacloud.com/help/doc-detail/125558.htm) to learn how to get the access key and secret key. If you have already created a [RAM role](https://www.alibabacloud.com/help/doc-detail/110376.htm) and assigned it to a VM instance, you can omit the `--access-key` and `--secret-key` options.
+
+Alibaba Cloud also supports using [Security Token Service (STS)](https://www.alibabacloud.com/help/doc-detail/100624.htm) to authorize temporary access to OSS. If you want to use STS, omit the `--access-key` and `--secret-key` options and set the `ALICLOUD_ACCESS_KEY_ID`, `ALICLOUD_ACCESS_KEY_SECRET` and `SECURITY_TOKEN` environment variables instead, for example:
+
+```bash
+# Use Security Token Service (STS)
+$ export ALICLOUD_ACCESS_KEY_ID=XXX
+$ export ALICLOUD_ACCESS_KEY_SECRET=XXX
+$ export SECURITY_TOKEN=XXX
+$ juicefs format \
+ --storage oss \
+    --bucket https://<bucket>.<endpoint> \
+ ... \
+ myjfs
+```
+
+OSS provides [multiple endpoints](https://www.alibabacloud.com/help/doc-detail/31834.htm) for each region; depending on your network (e.g. public or internal network), you should use the appropriate endpoint.
+
+If you are creating a file system on an Alibaba Cloud server, you can specify the bucket name directly in the `--bucket` option. For example:
+
+```bash
+# Running within Alibaba Cloud
+$ juicefs format \
+ --storage oss \
+    --bucket <bucket> \
+ ... \
+ myjfs
+```
+
+## Tencent Cloud COS
+
+The naming rule for a bucket in Tencent Cloud is `<bucket>-<APPID>`, so you must append your `APPID` to the bucket name. Please follow [this document](https://intl.cloud.tencent.com/document/product/436/13312) to learn how to get the `APPID`.
+
+The full format of the `--bucket` option is `https://<bucket>-<APPID>.cos.<region>.myqcloud.com`. Replace `<region>` with the specific region code, e.g. the region code of Shanghai is `ap-shanghai`. You can find all available regions [here](https://intl.cloud.tencent.com/document/product/436/6224). For example:
+
+```bash
+$ juicefs format \
+ --storage cos \
+    --bucket https://<bucket>-<APPID>.cos.<region>.myqcloud.com \
+ ... \
+ myjfs
+```
+
+If you are creating a file system on a Tencent Cloud server, you can specify the bucket name directly in the `--bucket` option. For example:
+
+```bash
+# Running within Tencent Cloud
+$ juicefs format \
+ --storage cos \
+    --bucket <bucket>-<APPID> \
+ ... \
+ myjfs
+```
+
+## Huawei Cloud OBS
+
+Please follow [this document](https://support.huaweicloud.com/usermanual-ca/zh-cn_topic_0046606340.html) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.obs.<region>.myhuaweicloud.com`. Replace `<region>` with the specific region code, e.g. the region code of Beijing 1 is `cn-north-1`. You can find all available regions [here](https://developer.huaweicloud.com/endpoint?OBS). For example:
+
+```bash
+$ juicefs format \
+ --storage obs \
+    --bucket https://<bucket>.obs.<region>.myhuaweicloud.com \
+ ... \
+ myjfs
+```
+
+If you are creating a file system on a Huawei Cloud server, you can specify the bucket name directly in `--bucket`. For example:
+
+```bash
+# Running within Huawei Cloud
+$ juicefs format \
+ --storage obs \
+    --bucket <bucket> \
+ ... \
+ myjfs
+```
+
+## Baidu Object Storage
+
+Please follow [this document](https://cloud.baidu.com/doc/Reference/s/9jwvz2egb) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.<region>.bcebos.com`. Replace `<region>` with the specific region code, e.g. the region code of Beijing is `bj`. You can find all available regions [here](https://cloud.baidu.com/doc/BOS/s/Ck1rk80hn#%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D%EF%BC%88endpoint%EF%BC%89). For example:
+
+```bash
+$ juicefs format \
+ --storage bos \
+    --bucket https://<bucket>.<region>.bcebos.com \
+ ... \
+ myjfs
+```
+
+If you are creating a file system on a Baidu Cloud server, you can specify the bucket name directly in `--bucket`. For example:
+
+```bash
+# Running within Baidu Cloud
+$ juicefs format \
+ --storage bos \
+    --bucket <bucket> \
+ ... \
+ myjfs
+```
+
+## Kingsoft Cloud KS3
+
+Please follow [this document](https://docs.ksyun.com/documents/1386) to learn how to get access key and secret key.
+
+KS3 provides [multiple endpoints](https://docs.ksyun.com/documents/6761) for each region; depending on your network (e.g. public or internal network), you should use the appropriate endpoint. For example:
+
+```bash
+$ juicefs format \
+ --storage ks3 \
+    --bucket https://<bucket>.<endpoint> \
+ ... \
+ myjfs
+```
+
+## Mtyun Storage Service
+
+Please follow [this document](https://www.mtyun.com/doc/api/mss/mss/fang-wen-kong-zhi) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.<endpoint>`. Replace `<endpoint>` with the specific value, e.g. `mtmss.com`. You can find all available endpoints [here](https://www.mtyun.com/doc/products/storage/mss/index#%E5%8F%AF%E7%94%A8%E5%8C%BA%E5%9F%9F). For example:
+
+```bash
+$ juicefs format \
+ --storage mss \
+    --bucket https://<bucket>.<endpoint> \
+ ... \
+ myjfs
+```
+
+## NetEase Object Storage
+
+Please follow [this document](https://www.163yun.com/help/documents/55485278220111872) to learn how to get access key and secret key.
+
+NOS provides [multiple endpoints](https://www.163yun.com/help/documents/67078583131230208) for each region; depending on your network (e.g. public or internal network), you should use the appropriate endpoint. For example:
+
+```bash
+$ juicefs format \
+ --storage nos \
+    --bucket https://<bucket>.<endpoint> \
+ ... \
+ myjfs
+```
+
+## QingStor
+
+Please follow [this document](https://docsv3.qingcloud.com/storage/object-storage/api/practices/signature/#%E8%8E%B7%E5%8F%96-access-key) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.<region>.qingstor.com`. Replace `<region>` with the specific region code, e.g. the region code of Beijing 3-A is `pek3a`. You can find all available regions [here](https://docs.qingcloud.com/qingstor/#%E5%8C%BA%E5%9F%9F%E5%8F%8A%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D). For example:
+
+```bash
+$ juicefs format \
+ --storage qingstor \
+    --bucket https://<bucket>.<region>.qingstor.com \
+ ... \
+ myjfs
+```
+
+:::note
+The format of the `--bucket` option for all QingStor-compatible object storage services is `http://<bucket>.<endpoint>`.
+:::
+
+## Qiniu
+
+Please follow [this document](https://developer.qiniu.com/af/kb/1479/how-to-access-or-locate-the-access-key-and-secret-key) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.s3-<region>.qiniucs.com`. Replace `<region>` with the specific region code, e.g. the region code of China East is `cn-east-1`. You can find all available regions [here](https://developer.qiniu.com/kodo/4088/s3-access-domainname). For example:
+
+```bash
+$ juicefs format \
+ --storage qiniu \
+    --bucket https://<bucket>.s3-<region>.qiniucs.com \
+ ... \
+ myjfs
+```
+
+## Sina Cloud Storage
+
+Please follow [this document](https://scs.sinacloud.com/doc/scs/guide/quick_start#accesskey) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://<bucket>.stor.sinaapp.com`. For example:
+
+```bash
+$ juicefs format \
+ --storage scs \
+    --bucket https://<bucket>.stor.sinaapp.com \
+ ... \
+ myjfs
+```
+
+## CTYun OOS
+
+Please follow [this document](https://www.ctyun.cn/help2/10000101/10473683) to learn how to get access key and secret key.
+
+The `--bucket` option format is `https://.oss-.ctyunapi.cn`, replace `