remove marigold JSON, mkchain paris
craigbuckler committed May 8, 2024
1 parent 5e25920 commit f1f76c5
Showing 5 changed files with 130 additions and 130 deletions.
10 changes: 5 additions & 5 deletions charts/snapshotEngine/README.md
@@ -1,6 +1,6 @@
# Snapshot Engine

-A Helm chart for creating Tezos snapshots and tarballs for faster node sync, all in kubernetes, and deploy them to a bucket with a static website.
+A Helm chart for creating Tezos snapshots and tarballs for faster node sync, all in Kubernetes, and deploying them to a bucket with a static website.

Check out [xtz-shots.io](https://xtz-shots.io) for an example.
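To deploy it, you install the chart into your cluster with Helm. A minimal sketch, assuming a local checkout of this repository (the release name, namespace, and values file below are placeholders, not part of the chart):

```shell
# Install the Snapshot Engine chart from a local checkout of this repo.
# Release name, namespace, and values file are illustrative.
helm install snapshot-engine ./charts/snapshotEngine \
  --namespace snapshotter --create-namespace \
  --values my-values.yaml
```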

@@ -45,7 +45,7 @@ The Snapshot Engine is a Helm Chart to be deployed on a Kubernetes Cluster. It

## How To

1. Create an S3 Bucket.

:warning: If you want to make it available over the internet, you will need to make it a [Public Bucket](https://aws.amazon.com/premiumsupport/knowledge-center/read-access-objects-s3-bucket/) and attach the following Bucket Policy.

@@ -72,9 +72,9 @@ The Snapshot Engine is a Helm Chart to be deployed on a Kubernetes Cluster. It

Replace `<ARN_OF_S3_BUCKET>` with the ARN of your new S3 Bucket.

-:warning: Pay close attention to the seemlingly redundant final `Resource` area.
+:warning: Pay close attention to the seemingly redundant final `Resource` area.

`/` and `/*` provide permission to the root and contents of the S3 Bucket respectively.

```json
{
  ...
}
```
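For reference, a policy of this shape might look like the sketch below. The statement layout and action list are assumptions based on the AWS guide linked above, not copied from this repository; substitute your bucket's ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "<ARN_OF_S3_BUCKET>",
        "<ARN_OF_S3_BUCKET>/*"
      ]
    }
  ]
}
```

Here `s3:ListBucket` applies to the bucket itself (`/`) while `s3:GetObject` applies to the objects inside it (`/*`), which is why both `Resource` entries are needed.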
@@ -225,7 +225,7 @@ We create only one snapshot at a time as having more than one in-progress slows

### Snapshot Scheduler Deployment

A Kubernetes Deployment called the **Snapshot Scheduler** runs indefinitely, triggering a new Kubernetes Job called **Snapshot Maker**.

Snapshot Scheduler waits until the Snapshot Maker Job is gone before scheduling a new one. This way, snapshots are created continuously rather than on a fixed schedule.
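Conceptually, the scheduler behaves like the loop below. This is a hypothetical sketch, not the chart's actual code; the Job and manifest names are placeholders:

```shell
# Hypothetical sketch of the Snapshot Scheduler's polling loop:
# only create a new Snapshot Maker Job once the previous one is gone.
while true; do
  if ! kubectl get job snapshot-maker >/dev/null 2>&1; then
    kubectl apply -f snapshot-maker-job.yaml
  fi
  sleep 30
done
```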

14 changes: 7 additions & 7 deletions charts/tezos/values.yaml
@@ -316,7 +316,7 @@ serviceMonitor:
# https://tezos.gitlab.io/user/key-management.html#signer
octezSigners: {}
# These signers use the octez-signer binary.
#
# Example:
# ```
# octezSigners:
@@ -429,9 +429,9 @@ dalNodes: {}

# When spinning up nodes, tezos-k8s will attempt to download a snapshot from a
# known source. This should be a url to a json metadata file in the format
-# xtz-shots uses. If you want to sync from scratch or for a private chain, set
-# to `null`.
-snapshot_source: https://snapshots.tezos.marigold.dev/api/tezos-snapshots.json
+# xtz-shots uses. If you want to use a specific snapshot, sync from scratch, or
+# create a private chain, set to `null`.
+snapshot_source: null
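# For example, to drive snapshot selection from a metadata index again,
# point this at a JSON file in the xtz-shots format (hypothetical URL):
#   snapshot_source: https://example.com/tezos-snapshots.json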

# By default, tezos-k8s will download and unpack snapshots.
# A tarball is a LZ4-compressed filesystem tar of a node's data directory.
@@ -450,10 +450,10 @@ prefer_tarballs: false
# will be ignored for all artifact types.
## NOTE: `*_tarball_url` and `*_snapshot_url` are mutually exclusive
## and cannot both be specified at the same time.
-archive_tarball_url: null # e.g. https://mainnet.xtz-shots.io/archive-tarball
-full_snapshot_url: null
+archive_tarball_url: null
+full_snapshot_url: null # e.g. https://snapshots.eu.tzinit.org/mainnet/full
full_tarball_url: null
-rolling_snapshot_url: null
+rolling_snapshot_url: null # e.g. https://snapshots.eu.tzinit.org/mainnet/rolling
rolling_tarball_url: null
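# For instance, this combination is valid (hypothetical URL shown):
#   snapshot_source: null
#   rolling_snapshot_url: https://snapshots.eu.tzinit.org/mainnet/rolling
#   rolling_tarball_url: null
# whereas setting both rolling_snapshot_url and rolling_tarball_url
# at the same time is an error.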

# List of peers for nodes to connect to. Gets set under config.json `p2p` field
4 changes: 2 additions & 2 deletions mkchain/tqchain/mkchain.py
@@ -165,7 +165,7 @@ def main():
},
"protocols": [
{
"command": "Proxford",
"command": "PtParisB",
"vote": {"liquidity_baking_toggle_vote": "pass"},
}
],
@@ -312,7 +312,7 @@ def main():
parametersYaml = yaml.safe_load(yaml_file)
activation = {
"activation": {
"protocol_hash": "ProxfordYmVfjWnRcgjWH36fW6PArwqykTFzotUxRs6gmTcZDuH",
"protocol_hash": "PtParisBQscdCm6Cfow6ndeU6wKJyA3aV1j4D3gQBQMsTQyJCrz",
"protocol_parameters": parametersYaml,
},
}
76 changes: 38 additions & 38 deletions test/charts/mainnet.expect.yaml
@@ -29,12 +29,12 @@ data:
}
FULL_SNAPSHOT_URL: ""
FULL_TARBALL_URL: ""
-ROLLING_SNAPSHOT_URL: ""
+ROLLING_SNAPSHOT_URL: "https://snapshots.us.tzinit.org/mainnet/rolling"
ROLLING_TARBALL_URL: ""
ARCHIVE_TARBALL_URL: ""
PREFER_TARBALLS: "false"
SNAPSHOT_METADATA_NETWORK_NAME: ""
-SNAPSHOT_SOURCE: "https://snapshots.tezos.marigold.dev/api/tezos-snapshots.json"
+SNAPSHOT_SOURCE: ""
OCTEZ_VERSION: "tezos/tezos:v19.0"
NODE_GLOBALS: |
{
@@ -135,7 +135,7 @@ spec:
appType: octez-node
node_class: rolling-node
spec:
containers:
- name: octez-node
image: "tezos/tezos:v19.0"
imagePullPolicy: IfNotPresent
@@ -145,27 +145,27 @@
- "-c"
- |
#!/bin/sh
set -xe
# ensure we can run octez-client commands without specifying client dir
ln -s /var/tezos/client /home/tezos/.tezos-client
#
# Not every error is fatal on start.
# So, we try a few times with increasing delays:
for d in 1 1 5 10 20 60 120; do
/usr/local/bin/octez-node run \
--bootstrap-threshold 0 \
--config-file /etc/tezos/config.json
sleep $d
done
#
# Keep the container alive for troubleshooting on failures:
sleep 3600
envFrom:
env:
- name: DAEMON
@@ -185,7 +185,7 @@ spec:
readinessProbe:
httpGet:
path: /is_synced
port: 31732
- name: sidecar
image: "ghcr.io/tacoinfra/tezos-k8s-utils:main"
imagePullPolicy: IfNotPresent
@@ -194,7 +194,7 @@ spec:
envFrom:
- configMapRef:
name: tezos-config
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
@@ -218,8 +218,8 @@ spec:
limits:
memory: 100Mi
requests:
memory: 80Mi
initContainers:
- name: config-init
image: "tezos/tezos:v19.0"
imagePullPolicy: IfNotPresent
@@ -229,31 +229,31 @@ spec:
- "-c"
- |
set -e
echo "Writing custom configuration for public node"
mkdir -p /etc/tezos/data
# if config already exists (container is rebooting), dump and delete it.
if [ -e /etc/tezos/data/config.json ]; then
printf "Found pre-existing config.json:\n"
cat /etc/tezos/data/config.json
printf "Deleting\n"
rm -rvf /etc/tezos/data/config.json
fi
/usr/local/bin/octez-node config init \
--config-file /etc/tezos/data/config.json \
--data-dir /etc/tezos/data \
--network $CHAIN_NAME
cat /etc/tezos/data/config.json
printf "\n\n\n\n\n\n\n"
envFrom:
- configMapRef:
name: tezos-config
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
@@ -272,7 +272,7 @@ spec:
- mountPath: /etc/tezos
name: config-volume
- mountPath: /var/tezos
name: var-volume
- name: config-generator
image: "ghcr.io/tacoinfra/tezos-k8s-utils:main"
imagePullPolicy: IfNotPresent
@@ -281,7 +281,7 @@ spec:
envFrom:
- configMapRef:
name: tezos-config
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
@@ -302,7 +302,7 @@ spec:
- mountPath: /var/tezos
name: var-volume
- mountPath: /etc/secret-volume
name: tezos-accounts
- name: snapshot-downloader
image: "ghcr.io/tacoinfra/tezos-k8s-utils:main"
imagePullPolicy: IfNotPresent
@@ -311,7 +311,7 @@ spec:
envFrom:
- configMapRef:
name: tezos-config
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
@@ -330,7 +330,7 @@ spec:
- mountPath: /etc/tezos
name: config-volume
- mountPath: /var/tezos
name: var-volume
- name: snapshot-importer
image: "tezos/tezos:v19.0"
imagePullPolicy: IfNotPresent
@@ -340,41 +340,41 @@ spec:
- "-c"
- |
set -e
bin_dir="/usr/local/bin"
data_dir="/var/tezos"
node_dir="$data_dir/node"
node_data_dir="$node_dir/data"
node="$bin_dir/octez-node"
snapshot_file=${node_dir}/chain.snapshot
if [ ! -f ${snapshot_file} ]; then
echo "No snapshot to import."
exit 0
fi
if [ -e ${node_data_dir}/context/store.dict ]; then
echo "Blockchain has already been imported. If a tarball"
echo "instead of a regular tezos snapshot was used, it was"
echo "imported in the snapshot-downloader container."
exit 0
fi
cp -v /etc/tezos/config.json ${node_data_dir}
if [ -f ${node_dir}/chain.snapshot.block_hash ]; then
block_hash_arg="--block $(cat ${node_dir}/chain.snapshot.block_hash)"
fi
${node} snapshot import ${snapshot_file} --data-dir ${node_data_dir} --no-check
find ${node_dir}
rm -rvf ${snapshot_file}
envFrom:
- configMapRef:
name: tezos-config
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
@@ -393,7 +393,7 @@ spec:
- mountPath: /etc/tezos
name: config-volume
- mountPath: /var/tezos
name: var-volume
- name: upgrade-storage
image: "tezos/tezos:v19.0"
imagePullPolicy: IfNotPresent
@@ -403,14 +403,14 @@ spec:
- "-c"
- |
set -ex
if [ ! -e /var/tezos/node/data/context/store.dict ]
then
printf "No store in data dir found, probably initial start, doing nothing."
exit 0
fi
octez-node upgrade storage --config /etc/tezos/config.json
envFrom:
env:
- name: DAEMON
Expand All @@ -421,7 +421,7 @@ spec:
- mountPath: /var/tezos
name: var-volume
securityContext:
fsGroup: 1000
volumes:
- emptyDir: {}
name: config-volume
