Step 1 – Keep the server up to date

# dnf update -y

Step 2 – Install Redis

Run the following DNF package manager command to install Redis:

# dnf install redis -y

Step 3 – Change supervised directive from no to systemd

This is an important change to make in the Redis configuration file. The supervised directive lets you declare an init system (systemd, in this case) to manage Redis as a service.

# vi /etc/redis.conf

Find the supervised directive and change it from no to systemd, so that the block looks like this:

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised systemd

Save and exit the Redis configuration file.
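If you prefer a non-interactive edit, a sed one-liner can make the same change (a sketch; it assumes the stock file ships an active "supervised no" line, so check the result afterwards):

# sed -i 's/^supervised no/supervised systemd/' /etc/redis.conf
# grep '^supervised' /etc/redis.conf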

After editing the file, start and enable the Redis service:

# systemctl start redis

# systemctl enable redis

To verify that Redis installed successfully, run the following command:

# redis-cli ping

Output:

PONG

If you see this, Redis is running on the server and we can begin configuring it to enhance its security.

Step 4 – Configure a Redis password

Configuring a Redis password enables one of its built-in security features, the auth command, which requires clients to authenticate before being allowed access to the database. The password is set directly in Redis's configuration file, /etc/redis.conf. Reopen that file:

# vi /etc/redis.conf

Find requirepass

# requirepass foobared

Uncomment it by removing the #, and change foobared to a very strong password of your choosing.
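One way to generate a suitably strong value is with openssl (just an example generator, not a requirement):

# openssl rand -base64 32

Paste the output after requirepass, so the line reads requirepass <generated-value>.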

After setting the password, save and close the file, then restart Redis:

# systemctl restart redis

To test that the password works, open the Redis client:

# redis-cli

The following sequence of commands verifies that the Redis password is working. Before authenticating, the first command tries to set a key to a value:

127.0.0.1:6379> set key1 23

That won’t work as you have not yet authenticated, so Redis returns an error:

Output

(error) NOAUTH Authentication required.

The following command authenticates with the password specified in the Redis configuration file:

127.0.0.1:6379> auth your_redis_password

Redis will acknowledge that you have been authenticated:

Output

OK

After that, running the previous command again should be successful:

127.0.0.1:6379> set key1 23

Output

OK

The get key1 command queries Redis for the value of the new key:

127.0.0.1:6379> get key1

Output

"23"

The quit command below exits redis-cli; you may also use exit:

127.0.0.1:6379> quit
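You can also authenticate non-interactively with redis-cli's -a option, which is handy for a quick check (redis-cli will warn that a password passed on the command line can end up in process listings and shell history, so don't use this in scripts on shared machines). You should get PONG back:

# redis-cli -a your_redis_password ping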

That covers installing Redis on AlmaLinux 8 and securing it with a password.

Reference page:

https://pve.proxmox.com/wiki/Cluster_Manager

Standard cluster commands

create cluster:

pvecm create CLUSTERNAME

check state of the new cluster:

pvecm status

join node to cluster:

pvecm add IP-ADDRESS-CLUSTER

check:

pvecm status
pvecm nodes

remove a cluster node:

pvecm delnode hp4
Killing node 4

Kill an old cluster without reinstalling

systemctl stop pve-cluster
systemctl stop corosync

Start the cluster file system again in local mode:

pmxcfs -l

Delete the corosync configuration files:

rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

You can now start the file system again as a normal service:

killall pmxcfs
systemctl start pve-cluster

The node is now separated from the cluster. You can delete it from any remaining node of the cluster with:

pvecm delnode oldnode
If the command fails due to a loss of quorum in the remaining node, you can set the expected votes to 1 as a workaround:

pvecm expected 1
And then repeat the pvecm delnode command.

Now switch back to the separated node and delete all the remaining cluster files on it. This ensures that the node can be added to another cluster again without problems.

rm /var/lib/corosync/*
As the configuration files from the other nodes are still in the cluster file system, you may want to clean those up too. After making absolutely sure that you have the correct node name, you can simply remove the entire directory recursively from /etc/pve/nodes/NODENAME.
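For completeness, the cleanup that paragraph describes is a single destructive command; replace NODENAME with the separated node's name and double-check it first:

rm -rf /etc/pve/nodes/NODENAME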

On the new node, take a backup of /etc/pve/nodes/YOURNEWNODENAME/qemu-server, delete all the files in it, and then try to join the cluster. Once the join is done, restore the files to their original location. It worked for me!

  1. NewNode: cp -rpf /etc/pve/nodes/YOURNEWNODENAME/qemu-server /root/
  2. NewNode: rm -rf /etc/pve/nodes/YOURNEWNODENAME/qemu-server/*
  3. OldNode: get the “Join Information” from your main node
  4. NewNode: click on “Join Cluster” and add the info copied earlier and join the cluster
  5. NewNode: cp -rpf /root/qemu-server /etc/pve/nodes/YOURNEWNODENAME/

And you are done! Just make sure you have no conflicting VM/LXC IDs (one quick check follows below).
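One quick way to check for ID conflicts before joining is to list guest IDs on both sides (a sketch; pvesh, qm and pct are standard PVE tools, but verify the output format on your version):

# on a node of the existing cluster: every VM/CT ID known to the cluster
pvesh get /cluster/resources --type vm --output-format json | grep -o '"vmid":[0-9]*'

# on the node that is about to join
qm list
pct list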

Change VMID 116 to 516 (backup/restore would take too long, so what to do?)

If the VM uses a raw local disk image:

1- qm shutdown 116
2- cd /var/lib/vz/images/
3- mv 116/ 516
4- mv /etc/pve/nodes/e1/qemu-server/116.conf /etc/pve/nodes/e1/qemu-server/516.conf
5- nano /etc/pve/nodes/e1/qemu-server/516.conf
change
scsi0: local:116/vm-116-disk-0.raw,iothread=1,size=600G
to
scsi0: local:516/vm-516-disk-0.raw,iothread=1,size=600G
6- cd 516
7- mv vm-116-disk-0.raw vm-516-disk-0.raw

congrats!

If the VM disk is a ZFS volume:

1- qm shutdown 116
2- zfs list -t all

rename the volume with zfs rename:

zfs rename rpool/data/vm-116-disk-0 rpool/data/vm-516-disk-0

3- mv /etc/pve/nodes/e1/qemu-server/116.conf /etc/pve/nodes/e1/qemu-server/516.conf
4- nano /etc/pve/nodes/e1/qemu-server/516.conf
change
scsi0: local-zfs:vm-116-disk-0,size=300G
to
scsi0: local-zfs:vm-516-disk-0,size=300G

congrats!
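In both cases, a quick sanity check after the rename is worthwhile (a sketch; qm rescan re-scans storages and updates disk references, and --vmid limits it to one guest):

qm rescan --vmid 516
qm config 516
qm start 516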

Ok, I have this figured out, and the result is largely why I started this thread: every procedure I have found is incomplete or has major assumptions that may not be obvious. So, future Googlers, here is the deal. Do this in this order on the respective nodes.

NODE = the node that is getting the new IP.
CLUSTER = all other Proxmox nodes that will maintain quorum and can talk to one another throughout this procedure.
ONE CLUSTER = any one single node within CLUSTER

On NODE

1. Edit /etc/pve/corosync.conf.
2. Update the IP for NODE.
3. Increment the config_version: value by one.
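The Cluster Manager wiki linked earlier suggests editing a copy and only moving it into place once it is complete, so a half-saved file never gets pushed out; roughly:

cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
vi /etc/pve/corosync.conf.new        # change NODE's IP and increment config_version
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf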

This change should push out a new corosync.conf to all nodes in CLUSTER. Confirm all nodes in CLUSTER have the new /etc/pve/corosync.conf. At this point the cluster will be broken. If you run pvecm status on NODE, you will see it can't find the rest of the nodes in the cluster. If you run pvecm status on CLUSTER, you will see they can all see each other but NODE is missing.

Still on NODE
1. Edit /etc/network/interfaces and update the IP to the desired IP.
2. Edit /etc/hosts and update the IP to the new IP.
3. Bring the interface up with the new static IP (change "vmbr0" to the name of your interface):

ifdown vmbr0; ifup vmbr0

4. Restart corosync and pve-cluster:

systemctl restart corosync
systemctl restart pve-cluster

On CLUSTER
1. Restart corosync on EVERY member of CLUSTER:

systemctl restart corosync

At this point, pvecm status should show all nodes as being in the cluster, good quorum, and NODE with its proper IP. Be patient as this can take a minute. To be extra sure, run cat /etc/pve/.members on NODE; it should show all the correct IPs.

Additional cleanup.

On NODE:

1. Optional: Edit /etc/issue. Update to the new IP on NODE. This ensures the console login screen shows the right IP.
2. Edit /etc/pve/storage.cfg and update any references to the old NODE IP – likely only an issue if you run PVE and PBS next to each other.
3. Optional: Edit /etc/pve/priv/known_hosts and update the IP of NODE.

Other weirdness: In this process I have found sometimes VMs and containers lose their network connection and need to be rebooted. I haven’t found a good way to avoid this or fix it beyond a VM/CT reboot. If anyone has an idea to make this 100% zero downtime (or near zero downtime), let me know and I’ll add that step.

Second procedure, from a different writer:

Stop the cluster services

systemctl stop pve-cluster
systemctl stop corosync

Mount the filesystem locally

pmxcfs -l

Edit the network interfaces file to have the new IP information

vi /etc/network/interfaces

Replace any host entries with the new IP addresses

vi /etc/hosts

Edit the corosync file and replace the old IPs with the new IPs for any changed hosts

BE SURE TO INCREMENT THE config_version: x LINE BY ONE TO ENSURE THE CONFIG IS NOT OVERWRITTEN

vi /etc/pve/corosync.conf

Edit the known hosts file to have the new IP(s)

vi /etc/pve/priv/known_hosts

If using ceph, edit the ceph configuration file to reflect the new network

(thanks u/FortunatelyLethal)

vi /etc/ceph/ceph.conf

:%s/192.168.1./192.168.2./g  <- vi command to replace all instances

If you want to be granular… fix the IP in /etc/issue

vi /etc/issue

Verify there aren’t any stragglers with the old IP hanging around

cd /etc
grep -R '192.168.1.' *
cd /var
grep -R '192.168.1.' *

Reboot the system to cleanly restart all the networking and services

reboot
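After the reboot, a quick verification pass on each node (same checks used elsewhere in these notes):

pvecm status
cat /etc/pve/.members
ip -br a        # confirm the interfaces carry the new addresses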

Referenced pages:

– https://forum.proxmox.com/threads/change-cluster-nodes-ip-addresses.33406/

– https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node

Third procedure:

Change node’s IP address

  1. Update the node’s IP address in the following files:
    • /etc/network/interfaces
    • /etc/hosts
  2. Stop the cluster and force local mode so you can update the cluster configuration:
     systemctl stop pve-cluster
     systemctl stop corosync
     pmxcfs -l
  3. In /etc/pve/corosync.conf update:
    • The node’s IP address
    • The IP addresses of other nodes in the cluster (if they’re changing)
    • The config_version, incrementing it by 1
  4. Restart the cluster:
     killall pmxcfs
     systemctl start pve-cluster

Change IP address on subsequent nodes

If you’re changing the IP address of multiple nodes in a cluster, repeat the instructions in this section for every additional node. The process to follow depends on whether the cluster’s quorum is still intact.

Intact quorum

Update the node’s IP address in the following files:

  • /etc/network/interfaces
  • /etc/hosts

SimpleHTTPServer with support for Range requests

Quickstart:

$ pip install rangehttpserver
$ python -m RangeHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...

or

apt install python3-rangehttpserver

python3 -m RangeHTTPServer

Addendum: a few more things worth noting here.

The version in the Debian bookworm repos is old, so you cannot pass a port parameter, which means you cannot run more than one instance at the same time. So I did the following:

I downloaded the latest release from GitHub and replaced the .py files, so now it is the latest version 🙂

Where are the files?

/usr/lib/python3/dist-packages/RangeHTTPServer/__init__.py
/usr/lib/python3/dist-packages/RangeHTTPServer/__main__.py

To specify a port: python3 -m RangeHTTPServer 7654
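To confirm Range support actually works, request a byte range and look for a 206 Partial Content status in the response headers (test.bin is just a placeholder name for a file in the served directory; adjust the port to whatever you started the server on):

curl -s -D - -o /dev/null -H "Range: bytes=0-99" http://127.0.0.1:7654/test.bin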

Shall we upgrade our CentOS 7 servers to AlmaLinux 8 with LEAPP?

1- 

sudo curl -o /etc/yum.repos.d/CentOS-Base.repo https://el7.repo.almalinux.org/centos/CentOS-Base.repo
sudo yum upgrade -y
sudo reboot

2- 

sudo yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el$(rpm --eval %rhel).noarch.rpm


3- 

sudo yum install -y leapp-upgrade leapp-data-almalinux


4- 

leapp preupgrade



Various issues will come up here. First, let's handle these in order:

sudo rmmod pata_acpi
echo PermitRootLogin yes | sudo tee -a /etc/ssh/sshd_config
sudo leapp answer --section remove_pam_pkcs11_module_check.confirm=True

Then it complained that there were too many kernel-devel packages, so let's remove the old kernels:

yum install yum-utils
package-cleanup --oldkernels --count=1

Then it threw an error again on one server where it should not have. I did the following:

 Title: Leapp detected loaded kernel drivers which have been removed in RHEL 8. Upgrade cannot proceed. Summary: Support for the following RHEL 7 device drivers has been removed in RHEL 8:

lsmod | grep -q pata_acpi && rmmod pata_acpi
lsmod | grep -q floppy && rmmod floppy

Then it complained about python3:

alternatives --set python /usr/bin/python3

Or it may be this instead, which is what happened:

Leapp upgrade error caused by important modules from kernel drivers

cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.backup
dracut -N -f
dracut -f --regenerate-all


5-

  leapp upgrade

6-

If it finishes without problems, verify:

cat /etc/redhat-release
cat /etc/os-release
uname -a
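It is also worth looking for leftover el7 packages once the upgrade finishes; anything still built for el7 is a candidate for removal or reinstallation (review the list before acting on it):

rpm -qa | grep -i el7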

ADDENDA, FOR EXAMPLE:

After the reboot, the network card name changed. After assigning an IP to the new card with nmtui, we set the interface to ens18 in directadmin.conf, for example, and it worked, and so on.

Afterwards, checking things over, everything seems to be working...



https://www.ip2location.com/free/visitor-blocker

  1. Head to the IP2Location Firewall IP list tool.
  2. Select the country you want to block or allow.
  3. Choose between IPv4 and IPv6 in the IP Version menu.
  4. Choose the format in the Output Format menu. The choice depends on whether you want to block or allow visitors from those countries.
  5. Click Download.
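If you download the list in plain CIDR format, one way to apply it on a Linux server is with ipset plus a single iptables rule. This is only a sketch: the file name blocklist.txt and the set name country_block are made up for the example.

# create a hash:net set and load the CIDRs into it
ipset create country_block hash:net
while read -r cidr; do
  ipset add -exist country_block "$cidr"
done < blocklist.txt

# drop incoming traffic from any address in the set
iptables -I INPUT -m set --match-set country_block src -j DROP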

#!/bin/bash

# Set the input file and command
INPUT_FILE="users.txt"
COMMAND="/usr/local/directadmin/directadmin admin-backup --destination=/home/admin/admin_backups"

# Read the users from the file
while IFS= read -r username; do
  # Run the command for each user
  ${COMMAND} --user=${username}
done < "${INPUT_FILE}"

users.txt

user1
user2
blabla
hort
zort
mort
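A slightly more defensive variant of the same loop (a sketch; it skips blank lines and comment lines, and keeps the DirectAdmin binary and its options in a bash array so nothing relies on word splitting):

#!/bin/bash

INPUT_FILE="users.txt"
CMD=(/usr/local/directadmin/directadmin admin-backup --destination=/home/admin/admin_backups)

while IFS= read -r username; do
  # skip empty lines and lines starting with #
  [[ -z "$username" || "$username" == \#* ]] && continue
  "${CMD[@]}" --user="$username"
done < "$INPUT_FILE"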

llama3:latest wrote it; the conversation went like this:

You

help me create a bash script where the command is /usr/local/directadmin/directadmin admin-backup --destination=/home/admin/admin_backups --user=testuser1 --user=testuser2 --user=testuser3 testuser1 testuser2 testuser3 lines are read from an external text file each on its own line

llama3:latest

Here is a bash script that reads the usernames from an external text file and runs the command for each user:

#!/bin/bash

# Set the input file and command
INPUT_FILE="users.txt"
COMMAND="/usr/local/directadmin/directadmin admin-backup --destination=/home/admin/admin_backups"

# Read the users from the file
while IFS= read -r username; do
  # Run the command for each user
  ${COMMAND} --user=${username}
done < "${INPUT_FILE}"

Let me explain what’s going on:

  1. The first line, #!/bin/bash, specifies the interpreter that should be used to run the script.
  2. We set the input file (users.txt) and command (COMMAND variable) at the top of the script.
  3. The while loop reads each line from the input file using IFS= read -r username. The -r option tells read to treat backslashes as literal characters, rather than interpreting them as escape sequences. This is important if your usernames contain special characters.
  4. Inside the loop, we run the command for each user by concatenating the COMMAND variable with the --user= parameter and the current username.

To use this script, simply create a file called users.txt in the same directory as the script, with one username per line:

testuser1
testuser2
testuser3

Then, make the script executable by running chmod +x scriptname.sh, and then run it by typing ./scriptname.sh.

Note: Make sure to replace users.txt with the actual path to your input file if it’s located elsewhere.

Date: 4 October 2024

# Switching / updating PHP versions

Make a backup

    tar czvf ~/domain-conf-backup.tgz /usr/local/directadmin/data/users/**/domains/*.conf

Force PHP to be version 1 if no default is set

    grep -rF -L php1_select /usr/local/directadmin/data/users/**/domains/*.conf | xargs sed -i.step1 '$ a php1_select=1'

Change new version to temporary position

    grep -rF -l 'php1_select=3' /usr/local/directadmin/data/users/**/domains/*.conf | xargs sed -i.step2 's/^php1_select=3/php1_select=31/'

Change old version to old position

    grep -rF -l 'php1_select=1' /usr/local/directadmin/data/users/**/domains/*.conf | xargs sed -i.step3 's/^php1_select=1/php1_select=3/'

Change new version to new position

    grep -rF -l 'php1_select=31' /usr/local/directadmin/data/users/**/domains/*.conf | xargs sed -i.step4 's/^php1_select=31/php1_select=1/'

Adjust PHP versions inside custombuild

    DO THIS MANUALLY

Rewrite config files

    cd /usr/local/directadmin/custombuild && ./build rewrite_confs
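To confirm the swap did what was expected, tally the php1_select values across all domain configs before and after (same glob as the commands above):

    grep -h '^php1_select=' /usr/local/directadmin/data/users/**/domains/*.conf | sort | uniq -c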