Maintenance

Getting Logs

We use journald, systemd's default log management service, to collect logs generated by the node and the engine. To interact with it, you'll use journalctl. Below are some useful commands for debugging your validator node.

Getting all logs for a unit

journalctl -u chainflip-node.service

This will output all logs generated by that unit. Press the space bar to page through the logs from oldest to newest.
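If the full history is too much to page through, journalctl can also limit the output; for example, to print just the most recent lines without the interactive pager:

```shell
# Show only the last 200 log lines for the unit, printed directly to stdout
journalctl -u chainflip-node.service -n 200 --no-pager
```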

Following logs

If you want to see logs in real time, run the following:

journalctl -f -u chainflip-node.service

Using time ranges

To filter your logs using time ranges you can run the following:

Relative time range

journalctl -u chainflip-node.service --since "1 hour ago"

Specific time ranges

journalctl -u chainflip-node.service --since "2023-06-20 23:15:00" --until "2023-06-20 23:25:00"

Setting Log Levels

Most of the time the default log levels are fine. However, they can be changed, either via an environment variable or dynamically.

Environment Variable Log Level Changes

The RUST_LOG environment variable, if set at engine startup, controls the initial filtering directives.

For example:

export RUST_LOG="debug,warp=off,hyper=off,jsonrpc=off,web3=off,reqwest=off"

Dynamic Log Level Changes

Returns the current filtering directives:

curl -X GET 127.0.0.1:36079/tracing

Note: The port used by the engine to accept these queries can be configured in your Settings.toml file.

Sets the filter directives so the default is DEBUG, and the logging in modules warp, hyper, jsonrpc, web3, and reqwest is turned off:

curl --json '"debug,warp=off,hyper=off,jsonrpc=off,web3=off,reqwest=off"' 127.0.0.1:36079/tracing

Equivalent to the above, but without using the --json shorthand:

curl -X POST -H 'Content-Type: application/json' -d '"debug,warp=off,hyper=off,jsonrpc=off,web3=off,reqwest=off"' 127.0.0.1:36079/tracing

The syntax for specifying filtering directives is given here.
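As a rough sketch of that syntax: a directive list is a comma-separated set of `target=level` pairs, with a bare level acting as the default. The module path below is illustrative, not a real engine module:

```shell
# Default everything to INFO, but enable TRACE for one (hypothetical) module path
curl --json '"info,chainflip_engine::p2p=trace"' 127.0.0.1:36079/tracing
```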

Migrating to a Different Server

One of our amazing community members has developed this guide. After you have created a new machine and finished all the steps up to and including Creating new Linux users, follow this guide.

Do not perform migration steps during a rotation.

Backup Your Engine Database from the Old Server

The engine database contains the threshold signing keyshares of your validator. Thus, you should back up this database before you start the migration process, and ensure you keep it stored safely and securely.

/etc/chainflip/data.db is the default location. If you have changed the location, adjust the path accordingly. The path the engine uses for its database is specified in the Engine Settings section.

# Optional
# [signing]
# db_file = "/etc/chainflip/data.db"

# Run these commands on your old server
sudo systemctl stop chainflip-engine
cp /etc/chainflip/data.db /etc/chainflip/data.db.backup
# Restart the engine on your old server while setting up the new one
sudo systemctl start chainflip-engine

Backup your Authorship Keys

/etc/chainflip/chaindata/chains/Chainflip-Perseverance/keystore
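The section above gives only the keystore path; a minimal sketch of backing it up (the destination directory here is an assumption — use any safe location, ideally copied off the server as well):

```shell
# Copy the block authorship keystore to a backup location
sudo cp -r /etc/chainflip/chaindata/chains/Chainflip-Perseverance/keystore ~/keystore-backup
```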

Transfer Database Backup to Your New Server

You’ll need to transfer the database backup from your old server to the new one. Use the scp command from your new server to securely copy the file:

# Run these commands on your new server
mkdir -p /etc/chainflip/
# Replace OLD_SERVER_IP with your old server's IP address
# Replace username with your username on the old server
scp username@OLD_SERVER_IP:/etc/chainflip/data.db.backup /etc/chainflip/
# Rename the file to data.db
mv /etc/chainflip/data.db.backup /etc/chainflip/data.db

You may need to use sudo to place the file in the destination directory and/or create the directory, or alternatively copy it to your home directory first and then move it with sudo.
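The "copy to your home directory first" variant mentioned above might look like this:

```shell
# Download the backup into your home directory (no sudo needed for scp)
scp username@OLD_SERVER_IP:/etc/chainflip/data.db.backup ~/
# Create the destination directory and move the file into place with sudo
sudo mkdir -p /etc/chainflip
sudo mv ~/data.db.backup /etc/chainflip/data.db
```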

On Your New Machine

Download Binaries via APT Repo

gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys BDBC3CF58F623694CD9E3F5CFB3E88547C6B47C6

Verify the key’s authenticity

gpg --export BDBC3CF58F623694CD9E3F5CFB3E88547C6B47C6 | gpg --show-keys

Important: Make sure you see the following output from the terminal:

pub   rsa3072/0xFB3E88547C6B47C6 2022-11-08 [SC]
      Key fingerprint = BDBC 3CF5 8F62 3694 CD9E 3F5C FB3E 8854 7C6B 47C6
uid                             Chainflip Labs GmbH <dev@chainflip.io>
sub   rsa3072/0x48249A1599E31935 2022-11-08 [E]

Add Chainflip’s Repo to apt sources list

gpg --export BDBC3CF58F623694CD9E3F5CFB3E88547C6B47C6 | sudo tee /etc/apt/trusted.gpg.d/chainflip-perseverance.gpg
echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/chainflip-perseverance.gpg] https://repo.chainflip.io/perseverance/jammy jammy main" | sudo tee /etc/apt/sources.list.d/chainflip.list

Installing The Packages

sudo apt-get update
sudo apt-get install -y chainflip-cli chainflip-node chainflip-engine

Create the keys directory

sudo mkdir /etc/chainflip/keys

Note: After this you don't need to generate new Signing Keys; you can skip that step and continue to the command below using your old Seed Secret.

Recovering Your Keys

Replace THE_OLD_PHRASE below with the old seed phrase you have backed up.

chainflip-cli generate-keys \
  --path /etc/chainflip/keys \
  --seed-phrase 'THE_OLD_PHRASE'

Please note that the Node Key cannot be recovered; a new one will be generated. This will result in a new peer ID for your node.

Transfer your Authorship Keys

The Authorship Keys are not restored from the seed; they need to be copied from the old machine.

By default, your block authorship keys are stored under /etc/chainflip/chaindata/chains/Chainflip-Perseverance/keystore.

If your node is a current authority, failing to copy these to the new node will prevent it from authoring blocks (and earning rewards).

scp -r username@OLD_SERVER_IP:/etc/chainflip/chaindata/chains/Chainflip-Perseverance/keystore /etc/chainflip/chaindata/chains/Chainflip-Perseverance/keystore

Back Them Up & Copy Your Validator ID

sudo chmod 600 /etc/chainflip/keys/ethereum_key_file
sudo chmod 600 /etc/chainflip/keys/signing_key_file
sudo chmod 600 /etc/chainflip/keys/node_key_file
history -c

Make sure to update your config file with the IP address of the new VPS. Otherwise you’ll get slashed once you start the engine on the new VPS.

Do not run two instances of your Validator at the same time. You will almost certainly be slashed. Make sure you turn off your old server before you turn on your new one.

Stop the old server and start the new one

Make sure to stop the engine and the node on the old VPS by running:

On the Old VPS

sudo systemctl disable --now chainflip-node.service
sudo systemctl disable --now chainflip-engine.service
sudo apt purge chainflip-node
sudo apt purge chainflip-engine

On the New VPS

  1. Set up the config file as explained in the Engine Settings section.
  2. Then start the node and engine services as explained in the Start up section.

Purging Chaindata

Do not run rm -rf /etc/chainflip/chaindata/*. This command will delete your session keys, making your node unable to author blocks. Consequently, your node will be forced out of the validator set.

Why would you want to purge the chaindata?

In certain situations, you might need to purge the chaindata and start syncing from a fresh state. Common reasons include:

  • You want to recover from a corrupted database or a full disk.
  • You started to sync a node in full sync mode and want to switch to warp sync.

In the case of a full disk, you might need to purge the chaindata to free up space while you expand your disk. Please note that purging the chain isn't pruning; it simply deletes the chaindata folder. Upon restarting, your node will begin syncing from scratch, downloading the state of old blocks from peers while importing the latest blocks (in the case of warp sync).

Purging the chaindata deletes all chain data and initiates syncing from scratch. While using warp sync can expedite the process, purging the chain requires validator downtime. If done at an unfortunate time, it could result in your node being suspended or removed from the validator set altogether.

chainflip-node purge-chain --chain /etc/chainflip/berghain.chainspec.json --base-path /etc/chainflip/chaindata/
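A full sequence around that command might look like the following sketch. Stopping the services first and checking that the keystore survived are assumptions about safe practice, not steps mandated by the source:

```shell
# Stop the engine and node before purging
sudo systemctl stop chainflip-engine.service chainflip-node.service

# Purge only the chain database; unlike rm -rf, this should leave the keystore alone
chainflip-node purge-chain --chain /etc/chainflip/berghain.chainspec.json --base-path /etc/chainflip/chaindata/

# Sanity check: the session keys should still be present
ls /etc/chainflip/chaindata/chains/Chainflip-Perseverance/keystore

# Restart; the node will now sync from scratch
sudo systemctl start chainflip-node.service chainflip-engine.service
```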

Recover Private Keys

All the information returned by the commands below is sensitive private data. Do not share it with anyone. Whoever has access to these keys can take all the FLIP added to your validator node.

You should always have a copy of all your keys somewhere safe outside the Server on which the node is running.

If for whatever reason you lost your keys but still have access to your server, don’t freak out (yet).

You can run some magical commands to get the keys.

Getting Node ID

The following command will return the Node ID of your validator node:

sudo chainflip-node key inspect-node-key --file /etc/chainflip/keys/node_key_file

Getting The Secret Seed

This command will return the private key of your node and other keys. DO NOT share this with anyone.

Chainflip will never ask you to reveal these keys under any circumstances.

To get your Secret Seed that was used to generate your signing key, run the following command:

chainflip-node key inspect "0x$(sudo cat /etc/chainflip/keys/signing_key_file)"

The output will look like this:

Secret Key URI `0x` is account:
  Network ID:        chainflip
  Secret seed:       <YOUR SECRET SEED>  # <-- Don't share it with anyone
  Public key (hex):  0x1803aecb4e11790e73f775206836f25b4348a3290a190319b4b075d9ccbd6349
  Account ID:        0x1803aecb4e11790e73f775206836f25b4348a3290a190319b4b075d9ccbd6349
  Public key (SS58): cFJQy58CJKJhNCBnV89qQhcQYQSgC6cg8dGWiTJb8xqWsMyQ3
  SS58 Address:      cFJQy58CJKJhNCBnV89qQhcQYQSgC6cg8dGWiTJb8xqWsMyQ3

Modifying your systemd config

You may want to modify the default command-line arguments supplied with the systemd config files. If so, do not edit the config located in /lib/systemd/system/, as it will be overwritten the moment you update. Instead, follow the instructions laid out in this StackOverflow post.

How do I override or configure systemd services? 
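In short, that approach uses a systemd drop-in override. A sketch (the ExecStart path and arguments shown in the comments are illustrative, not the packaged defaults):

```shell
# Opens an editor on a drop-in file under /etc/systemd/system/,
# which survives package updates, unlike /lib/systemd/system/
sudo systemctl edit chainflip-node.service

# In the editor, add something like (values illustrative):
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/chainflip-node --base-path /etc/chainflip/chaindata <your-arguments>
# The empty ExecStart= line is required to clear the packaged value first.

# Apply the override
sudo systemctl daemon-reload
sudo systemctl restart chainflip-node.service
```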
