An old note from way back:

Add /sbin/nologin to /etc/shells:

nano /etc/shells

/sbin/nologin

root@a~ # usermod -s /sbin/nologin myuser
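To verify the change (output shown with hypothetical UID/GID values):

root@a~ # getent passwd myuser
myuser:x:1001:1001::/home/myuser:/sbin/nologin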

0- What is the current state? Who is using what?
grep -E 'php[1-4]_select=' /usr/local/directadmin/data/users/*/domains/*.conf

1- First, back up the current state so nothing can go wrong
tar czvf ~/domain-conf-backup.tgz /usr/local/directadmin/data/users/*/domains/*.conf

2- Force PHP to be version 1 if no default is set
grep -F -L php1_select /usr/local/directadmin/data/users/*/domains/*.conf | xargs sed -i.step1 '$ a php1_select=1'
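To confirm that every conf file now carries a php1_select line, the same grep -L check should print nothing:

grep -F -L php1_select /usr/local/directadmin/data/users/*/domains/*.conf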

3- Edit the CustomBuild options (/usr/local/directadmin/custombuild/options.conf) as you like
cd /usr/local/directadmin/custombuild
./build set php1_release 8.0
./build set php3_release 7.4
./build php
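Optionally, confirm the freshly built interpreters respond (paths assume the usual CustomBuild layout under /usr/local/phpNN):

/usr/local/php80/bin/php -v
/usr/local/php74/bin/php -v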

4- Now you want to move all the users who used php1 to use php3, so you execute this script:
#!/bin/sh
# iterate over the domain confs directly; no need to parse ls output
for i in /usr/local/directadmin/data/users/*/domains/*.conf; do
       if ! grep -q '^php1_select' "$i"; then
               # no selection recorded yet: default it straight to php3
               echo 'php1_select=3' >> "$i"
               continue
       fi

       # rewrite an explicit php1 selection to php3
       perl -pi -e 's/^php1_select=1/php1_select=3/' "$i"
done
exit 0

5- Update config files:
cd /usr/local/directadmin/custombuild
./build update
./build rewrite_confs
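Afterwards, you can repeat the step-0 check to see the new distribution of selections:

grep -hE 'php[1-4]_select=' /usr/local/directadmin/data/users/*/domains/*.conf | sort | uniq -c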

Adjust the MaxRequestWorkers setting for Apache. The general formula for the calculation is the following:

# MaxRequestWorkers = (Total RAM – Memory used for Linux, DB, etc.) / average Apache process size

  • MPM Event: The default ServerLimit value is 16. To raise it, you must also raise MaxRequestWorkers, which is capped at ServerLimit x ThreadsPerChild (ThreadsPerChild defaults to 25): ServerLimit value x 25 = MaxRequestWorkers value. For example, if ServerLimit is set to 20, then MaxRequestWorkers can be 20 x 25 = 500. A quick way to measure the average Apache process size for the formula above is shown after this list.
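One common way to estimate the average Apache process size (this assumes the processes are named httpd, as on DirectAdmin/RHEL builds; RSS is column 8 of ps -yl output, in KB):

Code:

ps -ylC httpd | awk 'NR>1 {sum+=$8; n++} END {if (n) printf "%.1f MB\n", sum/n/1024}'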

Code:

find /home/*/imap/*/*/Maildir/{cur,new} -mtime +30 -type f -exec ls -la {} +

This prints a list of email files older than 30 days to the console. Review the output carefully; if it is not what you expect, double-check the command before moving on to deletion.

For deleting files server-wide:

Code:

find /home/*/imap/*/*/Maildir/{cur,new} -mtime +30 -type f -exec rm -f {} +
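Optionally, check how much space the deletion would reclaim first (GNU find and du assumed):

Code:

find /home/*/imap/*/*/Maildir/{cur,new} -mtime +30 -type f -print0 | du -ch --files0-from=- | tail -n 1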

rsync -aHAXxv --numeric-ids  --progress -e 'ssh -T -c aes128-gcm@openssh.com -o Compression=no -x ' <source_dir> user@<host>:<dest_dir>
a: archive mode - recursive; preserves owner, permissions, modification times, and group; copies symlinks as symlinks; preserves device files.
H: preserves hard-links
A: preserves ACLs
X: preserves extended attributes
x: don't cross file-system boundaries
v: increase verbosity
--numeric-ids: don't map uid/gid values by user/group name
--progress: show progress during transfer

ssh

T: turn off pseudo-tty to decrease cpu load on destination.
c aes128-gcm@openssh.com: use one of the fastest SSH ciphers; AES-128-GCM is hardware-accelerated on most modern CPUs and still secure.
o Compression=no: Turn off SSH compression.
x: turn off X forwarding if it is on by default.
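A concrete invocation, with a hypothetical source directory and destination host:

rsync -aHAXxv --numeric-ids --progress \
  -e 'ssh -T -c aes128-gcm@openssh.com -o Compression=no -x' \
  /home/user1/ root@203.0.113.10:/home/user1/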

Step 1 – Keep the server up to date

# dnf update -y

Step 2 – Install Redis

Run the following DNF package manager command to install Redis.

# dnf install redis -y

Step 3 – Change the supervised directive from no to systemd

This is an important configuration change to make in the Redis configuration file. The supervised directive allows you to declare an init system that will manage Redis as a service.

# vi /etc/redis.conf

Find the supervised directive and change it from no to systemd, so that the section looks like this:

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised systemd

Save and exit the Redis configuration file.

After editing the file, start and enable the Redis service:

# systemctl start redis

# systemctl enable redis

To verify that Redis installed successfully, we can run the following command:

# redis-cli ping

Output:

PONG

If this is the case, it means we now have Redis running on our server and we can begin configuring it to enhance its security.

Step 4 – Configure a Redis password

Configuring a Redis password enables one of its built-in security features — the auth command — which requires clients to authenticate before being allowed access to the database. Like the bind setting, the password is configured directly in Redis’s configuration file, /etc/redis.conf. Reopen that file:

# vi /etc/redis.conf

Find requirepass

# requirepass foobared

Uncomment it by removing the #, and change foobared to a very strong password of your choosing.
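After the edit, the line might look like this (placeholder password shown):

requirepass your_very_strong_password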

After setting the password, save and close the file then restart Redis:

# systemctl restart redis

To test that the password works, open the Redis client:

# redis-cli

The following sequence of commands verifies that the Redis password works. Before authenticating, the first command tries to set a key to a value:

127.0.0.1:6379> set key1 23

That won’t work as you have not yet authenticated, so Redis returns an error:

Output

(error) NOAUTH Authentication required.

The following command authenticates with the password specified in the Redis configuration file:

127.0.0.1:6379> auth your_redis_password

Redis will acknowledge that you have been authenticated:

Output

OK

After that, running the previous command again should be successful:

127.0.0.1:6379> set key1 23

Output

OK

The get key1 command queries Redis for the value of the new key:

127.0.0.1:6379> get key1

Output

"23"

This last command exits redis-cli. You may also use exit:

127.0.0.1:6379> quit
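Alternatively, you can authenticate non-interactively with the -a flag; note that the password then ends up in your shell history and the process list:

# redis-cli -a your_redis_password ping

Output:

PONG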

We have now successfully installed and configured Redis on AlmaLinux 8.

page:

https://pve.proxmox.com/wiki/Cluster_Manager

standard cluster commands

create cluster:

pvecm create CLUSTERNAME

check state of the new cluster:

pvecm status

join node to cluster:

pvecm add IP-ADDRESS-CLUSTER

check:

pvecm status
pvecm nodes

remove a cluster node:

pvecm delnode hp4
Killing node 4

Kill an old cluster without reinstalling

systemctl stop pve-cluster
systemctl stop corosync

Start the cluster file system again in local mode:

pmxcfs -l

Delete the corosync configuration files:

rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

You can now start the file system again as a normal service:

killall pmxcfs
systemctl start pve-cluster

The node is now separated from the cluster. You can delete it from any remaining node of the cluster with:

pvecm delnode oldnode
If the command fails due to a loss of quorum in the remaining node, you can set the expected votes to 1 as a workaround:

pvecm expected 1
And then repeat the pvecm delnode command.

Now switch back to the separated node and delete all the remaining cluster files on it. This ensures that the node can be added to another cluster again without problems.

rm /var/lib/corosync/*
As the configuration files from the other nodes are still in the cluster file system, you may want to clean those up too. After making absolutely sure that you have the correct node name, you can simply remove the entire /etc/pve/nodes/NODENAME directory recursively, as shown below.
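For example (replace NODENAME with the separated node's name):

rm -r /etc/pve/nodes/NODENAME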

On the new node, take a backup of /etc/pve/nodes/YOURNEWNODENAME/qemu-server, delete all the files in it, and then try to join the cluster; once done, restore the files to their original location. It worked for me!

  1. NewNode: cp -rpf /etc/pve/nodes/YOURNEWNODENAME/qemu-server /root/
  2. NewNode: rm -rf /etc/pve/nodes/YOURNEWNODENAME/qemu-server/*
  3. OldNode: get the “Join Information” from your main node
  4. NewNode: click on “Join Cluster” and add the info copied earlier and join the cluster
  5. NewNode: cp -rpf /root/qemu-server /etc/pve/nodes/YOURNEWNODENAME/

and you are done! Just make sure you have no conflicting VM / LXC IDs.
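A quick way to list the VM/LXC IDs in use on a host, so you can compare both sides before joining (assumes the standard /etc/pve/nodes/*/qemu-server and /etc/pve/nodes/*/lxc config layout):

ls /etc/pve/nodes/*/qemu-server/*.conf /etc/pve/nodes/*/lxc/*.conf 2>/dev/null | xargs -n1 basename -s .conf | sort -n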