let's install 3proxy on our 32-bit ubuntu or debian server, quickly and simply

detailed information about 3proxy is on its official site: http://www.3proxy.ru/

the 3proxy setup we are going to build has these properties:
– it uses the 3proxy-0.7 release (the latest at the time this script was written)
– it runs on port 3128
– on first use, the browser asks for a username and password before allowing access
– it keeps absolutely no logs
– it is fully anonymous and leaves no proxy traces under any circumstances
– it uses Yandex free DNS

the installation goes like this:
1- prepare a current 32-bit ubuntu/debian release,

2- wget http://internetdede.com/3pr/3proxyinstaller.sh
chmod +x 3proxyinstaller.sh
./3proxyinstaller.sh

3- let's set a username and password

nano /etc/3proxy/3proxy.cfg

the default username is: haciosman
password: muhittin123
change the username and password to whatever you like,

4- the installer has already configured our 3proxy service to start at boot,
but to start it right away, let's run this command:

/etc/3proxy/3proxy /etc/3proxy/3proxy.cfg &

5- that's it. enjoy…
our port is 3128 and the proxy address is our server's IP address.
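As a quick check that the proxy is answering and asking for auth, you can request your apparent IP through it with curl. The server IP below and the ifconfig.me service are placeholders of my choosing; the credentials are the defaults from step 3:

```shell
# Hypothetical server IP -- substitute your own, plus the user/pass
# you set in /etc/3proxy/3proxy.cfg (defaults shown).
PROXY_HOST=203.0.113.10
PROXY_PORT=3128
PROXY_USER=haciosman
PROXY_PASS=muhittin123

# This should print the proxy server's IP, not your own:
curl -x "http://${PROXY_USER}:${PROXY_PASS}@${PROXY_HOST}:${PROXY_PORT}" https://ifconfig.me
```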

UPDATE ON PERFORMANCE:
3proxy as configured here runs just fine even on a VPS with 128 MB of RAM.
It consumes almost no system resources.
HOWEVER, I have found it has performance problems:
occasional slow responses, content rendered incorrectly, and so on.
I don't think these are caused by my installation.
The test server I installed on has 256 MB of RAM and a 1 Gbit connection.
So at the end of the day, I think SQUID3 runs much more smoothly.
Allocating a bit more system resources and carrying on with SQUID3 makes far more sense..

https://www.shukko.com/x3/2010/05/08/anonymous-proxy-using-squid-3-on-ubuntu-9-04-server-with-web-based-auth-user-pass-soruyor-d/


1- install centos
2- yum update, /etc/resolv.conf settings, etc.
3- yum -y install mysql mysql-server
chkconfig --levels 235 mysqld on
/etc/init.d/mysqld start
mysql_secure_installation

4- yum -y install httpd
chkconfig --levels 235 httpd on
/etc/init.d/httpd start

5- yum -y install php php-mysql php-gd php-imap php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-mcrypt php-mssql php-snmp php-soap php-tidy curl curl-devel php-pecl-apc
/etc/init.d/httpd restart

nano /var/www/html/info.php
http://tr2.php.net/phpinfo
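The info.php above only needs a single phpinfo() call; a minimal way to create it (delete the file after checking, it leaks configuration details):

```shell
# Write a minimal phpinfo page into the default Apache docroot.
DOCROOT=/var/www/html
cat > "$DOCROOT/info.php" <<'EOF'
<?php
phpinfo();
EOF
```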

http://bla.bla/info.php

6- rpm --import http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
yum -y install http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
yum -y install phpmyadmin

nano /etc/httpd/conf.d/phpmyadmin.conf
#
# Web application to manage MySQL
#

#
# Order Deny,Allow
# Deny from all
# Allow from 127.0.0.1
#

Alias /phpmyadmin /usr/share/phpmyadmin
Alias /phpMyAdmin /usr/share/phpmyadmin
Alias /mysqladmin /usr/share/phpmyadmin

nano /usr/share/phpmyadmin/config.inc.php
[...]
/* Authentication type */
$cfg['Servers'][$i]['auth_type'] = 'http';
[...]

/etc/init.d/httpd restart

http://bla.bla/phpmyadmin/

DONE!

Yandex free DNS servers:

Basic: 77.88.8.8 and 77.88.8.1 (quick and reliable DNS)
Safe: 77.88.8.88 and 77.88.8.2 (protection from viruses and fraudulent content)
Family: 77.88.8.7 and 77.88.8.3 (no adult content)
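To point a Linux server at these resolvers (the Basic pair, say), a minimal /etc/resolv.conf sketch; note that on systems where resolvconf or a DHCP client manages this file, you should configure those instead:

```shell
# Sketch: replace resolv.conf with the Yandex Basic resolvers.
RESOLV_CONF=/etc/resolv.conf
cat > "$RESOLV_CONF" <<'EOF'
nameserver 77.88.8.8
nameserver 77.88.8.1
EOF
```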

I have an earlier post on this topic,
but I've learned something new
worth adding.

let's make an easy SOCKS proxy with ssh

ssh -fCND 127.0.0.1:15428 user@sunucu.com

-D binds SSH to the IP and port specified
-f tells it to become a background daemon process
-N tells it that no commands are going to be run
-C enables compression so web browsing will be slightly faster

finally, to add it to our browser as a proxy:
choose SOCKS5 proxy
port: 15428
host: 127.0.0.1
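You can also check the tunnel from the command line before touching the browser; curl speaks SOCKS5 directly (ifconfig.me here is just an example what-is-my-IP service):

```shell
# socks5h:// makes curl resolve DNS through the tunnel as well,
# so name lookups don't leak outside the proxy.
curl -x socks5h://127.0.0.1:15428 https://ifconfig.me
```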

enjoy.

socks 4/5 proxy using erlang? and what on earth is Erlang?

What is Erlang?

Erlang is a programming language used to build massively scalable soft real-time systems with requirements on high availability. Some of its uses are in telecoms, banking, e-commerce, computer telephony and instant messaging. Erlang’s runtime system has built-in support for concurrency, distribution and fault tolerance.

www.erlang.org

 


apt-get install erlang-base
wget https://www.shukko.com/x3/wp-content/uploads/Socks2.tar.gz
tar zxvf Socks2.tar.gz; rm -f Socks2.tar.gz; cd Socks2
# edit socks.erl -> 'start() -> start(4, 8899).' -> 4 - thread, 8899 - port
./run.sh

I've read through the code; as far as I can tell it's rather nice, harmless, and does its job.
but I need more detail..
doesn't this thing have an auth mechanism?

we have a machine suitable for the installation.

it has 4 x 2 TB data disks,

and we want to run these disks as software RAID 10 under Proxmox.

in an earlier post I handled this by installing Debian Wheezy first and then building LVM RAID on top of it.

but that approach doesn't appeal to me; that kind of RAID setup causes trouble during upgrades.

so this time the plan is: first install Proxmox normally on the 1st of our 4 disks, then convert the system to RAID 10 while Proxmox is running.

the steps are as follows:

1- download the current Proxmox ISO
2- do a normal Proxmox installation onto /dev/sda
3- once everything is up and running, connect to the system over ssh
4- configure the repositories Proxmox needs, update the system, and finally install the mdadm packages

nano /etc/apt/sources.list
------------
deb http://ftp.de.debian.org/debian wheezy main contrib
# security updates
deb http://security.debian.org/ wheezy/updates main contrib
# PVE pve-no-subscription repository provided by proxmox.com, NOT recommended for production use
deb http://download.proxmox.com/debian wheezy pve-no-subscription
-------------

apt-get update
apt-get dist-upgrade

apt-get install mdadm

5- at this stage we will copy the partition table from disk 1 to disks 2, 3 and 4.
but first, in case a leftover mdadm configuration exists on the old disks, let's zero them out to deal with it; this command wipes the old partitions and the MBR on each disk:

# dd if=/dev/zero of=/dev/sdx bs=512 count=1

after that, let's copy the partition tables; for 4 disks it goes like this:

sfdisk -d /dev/sda | sfdisk -f /dev/sdb
sfdisk -d /dev/sda | sfdisk -f /dev/sdc
sfdisk -d /dev/sda | sfdisk -f /dev/sdd

NOTE NOTE NOTE // UPDATE UPDATE

IF OUR PARTITIONS HAPPEN TO BE GPT

install gdisk

Copy the partition scheme from /dev/sda to /dev/sdb:

sgdisk -R=/dev/sdb /dev/sda

and this one is mandatory; now randomize the GUIDs:

sgdisk -G /dev/sdb

 

6- let's mark the partition type on our 3 new disks as RAID

sfdisk -c /dev/sdb 1 fd
sfdisk -c /dev/sdb 2 fd
sfdisk -c /dev/sdc 1 fd
sfdisk -c /dev/sdc 2 fd
sfdisk -c /dev/sdd 1 fd
sfdisk -c /dev/sdd 2 fd

NOTE NOTE NOTE // UPDATE UPDATE

for GPT I did it like this.
maybe there is an easier way; I couldn't find one, noob that I am.

gdisk /dev/sdb
press t
select partition 1 > set the type to FD00

once this is done for every partition on every disk: w to save, q to quit

7- let's INITIALIZE our RAID configuration
IMPORTANT NOTE: if our disks were previously part of a RAID setup,
those arrays may have been written automatically into mdadm.conf when we installed mdadm; so after initializing our RAID we need to inspect /etc/mdadm/mdadm.conf.
if it contains stale RAID array UUID entries, we must delete them and record the new layout instead.

mdadm --create /dev/md0 --level=1 --raid-disks=4 missing /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=10 --raid-disks=4 missing /dev/sdb2 /dev/sdc2 /dev/sdd2

let's look over the conf file and delete any old arrays; then, to record our new layout:

mdadm --examine --scan >> /etc/mdadm/mdadm.conf
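Before moving on, it's worth confirming that the conf file now lists exactly the two new arrays and that the kernel sees them (each array will show as degraded for now, since we created them with one 'missing' member):

```shell
# The ARRAY lines in the conf should match what --examine --scan reports:
grep '^ARRAY' /etc/mdadm/mdadm.conf
# /proc/mdstat should list md0 and md1, each with one slot missing:
grep '^md' /proc/mdstat
```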

that's this step done

8- let's move our /boot directory onto /dev/md0 and change fstab so that /boot mounts from /dev/md0

mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/* /mnt/md0

then

nano /etc/fstab
it should end up like the following; we simply disable the UUID line
-----------------
# /dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
#UUID=cc425576-edf6-4895-9aed-ccfd89aeb0fb /boot ext3 defaults 0 1
/dev/md0 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
-------------------

9- reboot the system.
if everything goes well, it means our system is booting from /dev/md0.
bravo, we have cleared a serious hurdle |:)

after the system comes up, let's run the checks

mount | grep boot
should print a line like the one below
/dev/md0 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)

if we see this, the step is complete.

10- now let's tell grub that we want to boot from /dev/md0; in short, enter the commands below


echo '# customizations' >> /etc/default/grub
echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
echo raid1 >> /etc/modules
echo raid10 >> /etc/modules
echo raid1 >> /etc/initramfs-tools/modules
echo raid10 >> /etc/initramfs-tools/modules
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/sdc
grub-install /dev/sdd
update-grub
update-initramfs -u

done

11- now we need to add /dev/sda1 into our RAID array

sfdisk -c /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1

12- before the next step: we are about to do an LVM move that will take a very long time, so it's worth installing and running
screen
and doing the work inside it.

we are going to move the LVM onto /dev/md1


pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1

pvmove will take a very long time. best to go to sleep in the meantime, or step out for some air. with 2 TB disks and a current CPU it will take at least 2-3 hours 🙂

when it finishes, we will reduce the VG off sda2 and remove the PV

vgreduce pve /dev/sda2
pvremove /dev/sda2

13- in the final stage we will fold /dev/sda2 into our RAID as well

sfdisk --change-id /dev/sda 2 fd
mdadm --add /dev/md1 /dev/sda2

14- after this we can sit back and enjoy watching our RAID rebuild 🙂

watch -n 5 cat /proc/mdstat

we can even speed it up a bit if we like

echo 800000 > /proc/sys/dev/raid/speed_limit_min
echo 1600000 > /proc/sys/dev/raid/speed_limit_max

enjoy.
YOUR PROXMOX SOFTWARE RAID 10 SETUP IS READY FOR USE

APPENDIX:
15- after completing all this, our machine answers df -h as follows

Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 3.2G 416K 3.2G 1% /run
/dev/mapper/pve-root 20G 1.2G 18G 7% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 6.3G 3.1M 6.3G 1% /run/shm
/dev/mapper/pve-data 1.8T 196M 1.8T 1% /var/lib/vz
/dev/md0 495M 58M 412M 13% /boot
/dev/fuse 30M 12K 30M 1% /etc/pve

/var/lib/vz/ is 2 TB? something's off; it should have been 4 TB 🙂
quite normal: the rest of our RAID 10 capacity is sitting there as free VG space. SEE:

vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953544
Alloc PE / Size 472709 / 1.80 TiB
Free PE / Size 480835 / 1.83 TiB
VG UUID 16k1ou-8jQ7-OB63-Jesb-s7p4-SOPW-deKGGc

Very nice. So what do we need to do? We need to fold this free space into our existing LVM volume and make it usable under /var/lib/vz/.
At this stage we will draw on our deep Linux LVM experience.

first, let's survey the situation with the standard commands:

lvdisplay
pvdisplay
vgdisplay


root@pmd04:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 3.64t 1.83t
root@pmd04:~# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 pve lvm2 a-- 3.64t 1.83t
root@pmd04:~# lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
data pve -wi-ao--- 1.78t
root pve -wi-ao--- 20.00g
swap pve -wi-ao--- 8.00g

then
let's grow our LV into the free VG space and resize the filesystem

root@pmd04:~# lvextend -l +100%FREE /dev/pve/data
Extending logical volume data to 3.61 TiB
Logical volume data successfully resized
root@pmd04:~# resize2fs /dev/pve/data
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/pve/data is mounted on /var/lib/vz; on-line resizing required
old_desc_blocks = 118, new_desc_blocks = 232
Performing an on-line resize of /dev/pve/data to 969089024 (4k) blocks.
The filesystem on /dev/pve/data is now 969089024 blocks long.
root@pmd04:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 3.2G 416K 3.2G 1% /run
/dev/mapper/pve-root 20G 1.2G 18G 7% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 6.3G 3.1M 6.3G 1% /run/shm
/dev/mapper/pve-data 3.6T 197M 3.6T 1% /var/lib/vz
/dev/md0 495M 58M 412M 13% /boot
/dev/fuse 30M 12K 30M 1% /etc/pve
root@pmd04:~#

so, did it turn out nicely or what?
yes it did
all right then |:)

————————————-

APPENDIX – GPT (from a German source)

————————————-

Proxmox 3.1 on software RAID with GPT

Proxmox officially does not support software RAID, but it can be converted to one after installation:
http://boffblog.wordpress.com/2013/08/22/how-to-install-proxmox-ve-3-0-on-software-raid/

With large disks, however, proxmox uses GPT for partitioning. So you already get an error message when copying the partition table:
“WARNING: GPT (GUID Partition Table) detected on ‘/dev/sda’! The util sfdisk doesn’t support GPT. Use GNU Parted.”
The fix is to use gdisk. What exactly the 1st partition is used for I don't know. In my case /boot was on /dev/sda2 and the lvm volumes were on /dev/sda3.
So I used the following commands:

apt-get update
apt-get dist-upgrade
apt-get install mdadm gdisk
sgdisk -R /dev/sdb /dev/sda
!!!CAUTION: mind the argument order; in this case the table is copied from right to left
sgdisk -G /dev/sdb
dd if=/dev/sda1 of=/dev/sdb1
NECESSARY?
sgdisk -t 2:fd00 /dev/sdb
sgdisk -t 3:fd00 /dev/sdb

Is a reboot necessary?

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb3
mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/* /mnt/md0

edit /etc/fstab and replace the UUID in front of /boot with /dev/md0
then reboot again!

echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
echo raid1 >> /etc/modules
echo raid1 >> /etc/initramfs-tools/modules
grub-install /dev/sda
grub-install /dev/sdb
update-grub
update-initramfs -u
mdadm --add /dev/md0 /dev/sda2
pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda3 /dev/md1
vgreduce pve /dev/sda3
pvremove /dev/sda3
sgdisk -t 3:fd00 /dev/sda
mdadm --add /dev/md1 /dev/sda3
cat /proc/mdstat


 


 

UPDATE 23 MAY 2014

This business has gotten thoroughly stale

But I know how much I struggled with it 🙂

So let me write down what I know once more, again, from the top

this time with 8 disks again, working from my .bash_history file

Everything above is in there, with short little explanations

There are also a couple of fine points

Taking those into account at the end, and applying them in the future, would be a wise decision.

I have decided not to write it up.

Because it really wore me down

I'll wrestle with it again and do it another time…

 

Howto build php 5.3.x (cgi) 5.2.x (cli)

This is the config I ended up with (cloudlinux option is optional by the way):

installation

Code:
cd /usr/local/directadmin/custombuild
./build set custombuild 1.2
./build update
./build set autover no
cp -Rp configure custom
cp -pf configure/suphp/configure.php5 custom/suphp/configure.php6
perl -pi -e 's/php5:/phprep:/' versions.txt
perl -pi -e 's/php6/php5/' versions.txt
perl -pi -e 's/phprep/php6/' versions.txt
./build set cloudlinux yes
./build set php5_ver 5.3
./build set php6_cgi no
./build set php6_cli yes
./build set php5_cgi yes
./build set php5_cli no
./build php n

After the build script finishes, it tries to restart apache but can’t, because libphp6.so cannot be found. This is likely because the build script has libphp6.so hardcoded somewhere, and since we’re using that name to cheat our way through this procedure, we can use sed to fix it;

Code:
sed -i 's/php6/php5/g' /etc/httpd/conf/extra/httpd-phpmodules.conf
service httpd restart

switching using a .htaccess
Switching from the default can now be done with a .htaccess in a users’ public_html dir.

Code:
<FilesMatch "\.(inc|php|php3|php4|php5|php6|phtml|phps)$">
SetHandler application/x-httpd-php
</FilesMatch>

ioncube loader
If you also want to add ioncube support to the 5.2 module, you need a workaround in order to be able to build ioncube as well.

Code:
./build set php6_cli no && ./build ioncube && ./build php6_cli yes
ionCube loader has been installed.
cp /usr/local/directadmin/custombuild/ioncube/ioncube_loader_lin_5.2.so /usr/local/lib/
echo "zend_extension=/usr/local/lib/ioncube_loader_lin_5.2.so" >> /usr/local/lib/php.ini

using pecl
Setting up pecl is easy too, just need to point it to the right config file:

Code:
/usr/local/bin/pecl config-set php_ini /usr/local/lib/php.ini
/usr/local/bin/pear config-set php_ini /usr/local/lib/php.ini
/usr/local/php5/bin/pear config-set php_ini /usr/local/etc/php5/cgi/php.ini
/usr/local/php5/bin/pecl config-set php_ini /usr/local/etc/php5/cgi/php.ini

Then you can use pecl to install modules like apc, imagemagick, etc.
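For example, adding APC to the 5.3 CGI tree set up above might look like this; the ini path follows the pecl/pear config lines earlier, but the extension line and restart step are my assumptions, so check your own php.ini layout:

```shell
# Build APC against the php5 (5.3 CGI) tree and enable it.
/usr/local/php5/bin/pecl install apc
echo 'extension=apc.so' >> /usr/local/etc/php5/cgi/php.ini
service httpd restart
```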

final result

Code:
/usr/local/bin/php -v
PHP 5.2.17 (cli) (built: Sep  4 2012 16:43:01)
Copyright (c) 1997-2010 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies
    with the ionCube PHP Loader v4.2.2, Copyright (c) 2002-2012, by ionCube Ltd.

/usr/local/php5/bin/php -v
PHP 5.3.16 (cli) (built: Sep  4 2012 16:46:05)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
    with the ionCube PHP Loader v4.2.2, Copyright (c) 2002-2012, by ionCube Ltd.

how to migrate a DirectAdmin server to another DirectAdmin server with rsync, without using DirectAdmin's internal backup mechanisms.

original link: http://www.techtrunch.com/linux/migrate-directadmin-server-directadmin-server

our rsync commands

rsync -avz --stats --progress --delete -e ssh /var/lib/mysql/ XX.XXX.XX.XXX:/var/lib/mysql
rsync -avz --stats --progress --delete -e ssh /home/ XX.XXX.XX.XXX:/home
rsync -avz --stats --progress -e ssh /etc/passwd XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/shadow XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/group XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/exim.conf XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/exim.pl XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/system_filter.exim XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/exim.crt XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/exim.key XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/proftpd.conf XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/proftpd.vhosts.conf XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/proftpd.passwd XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /etc/named.conf XX.XXX.XX.XXX:/etc
rsync -avz --stats --progress -e ssh /root/.my.cnf XX.XXX.XX.XXX:/root
rsync -avz --stats --progress --delete -e ssh /etc/virtual/ XX.XXX.XX.XXX:/etc/virtual
rsync -avz --stats --progress --delete -e ssh /etc/httpd/conf/ XX.XXX.XX.XXX:/etc/httpd/conf
rsync -avz --stats --progress --delete -e ssh /var/named/ XX.XXX.XX.XXX:/var/named
rsync -avz --stats --progress --delete -e ssh /var/spool/virtual/ XX.XXX.XX.XXX:/var/spool/virtual
rsync -avz --stats --progress --delete -e ssh /var/spool/mail/ XX.XXX.XX.XXX:/var/spool/mail
rsync -avz --stats --progress --delete -e ssh /var/spool/cron/ XX.XXX.XX.XXX:/var/spool/cron
rsync -avz --stats --progress --delete -e ssh /var/www/ XX.XXX.XX.XXX:/var/www
rsync -avz --stats --progress --delete -e ssh /var/log/ XX.XXX.XX.XXX:/var/log
rsync -avz --stats --exclude 'custombuild*' --progress --delete -e ssh /usr/local/directadmin/ XX.XXX.XX.XXX:/usr/local/directadmin
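Since several of these transfers use --delete, it's worth rehearsing each one first with rsync's -n (dry-run) flag, which prints what would change without touching the destination; the /home line as an example:

```shell
# Dry run of the /home transfer: lists actions, transfers nothing.
# XX.XXX.XX.XXX is the destination IP, as in the commands above.
rsync -avzn --stats --delete -e ssh /home/ XX.XXX.XX.XXX:/home
```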

for additional DirectAdmin file paths, have a look at http://directadmin.com/paths.html

configuration files
it can be useful to move these by hand,
although the rsync above carries them as well


/etc/httpd/conf/httpd.conf
/etc/httpd/conf/extra/httpd-vhosts.conf
/etc/httpd/conf/ips.conf
/etc/proftpd.conf
/etc/proftpd.vhosts.conf
/usr/local/directadmin/scripts/setup.txt
/usr/local/directadmin/data/admin/ip.list
/usr/local/directadmin/data/admin/show_all_users.cache
/usr/local/directadmin/data/users/*/user.conf
/usr/local/directadmin/data/users/*/httpd.conf
/usr/local/directadmin/data/users/*/user_ip.list
/usr/local/directadmin/data/users/*/domains/*.conf
/usr/local/directadmin/data/users/*/domains/*.ftp
/usr/local/directadmin/data/users/*/domains/*.ip_list
/var/named/*.db

Opteron 8 core cpu
adaptec 6805e 256MB
8xseagate 7200 RPM disks RAID 10
No BBU but write/read caches active
256K stripe size.
64GB ram

pveperf
CPU BOGOMIPS: 32002.08
REGEX/SECOND: 856289
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 552.91 MB/sec
AVERAGE SEEK TIME: 6.43 ms
FSYNCS/SECOND: 2412.62

server is completely idle at the moment

some other tests:

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.79119 s, 283 MB/s

dd if=/dev/zero of=test bs=1024k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 42.5562 s, 404 MB/s

ioping -c10 .
4096 bytes from . (ext3 /dev/mapper/pve-root): request=1 time=0.1 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=2 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=3 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=4 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=5 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=6 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=7 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=8 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=9 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=10 time=0.2 ms

--- . (ext3 /dev/mapper/pve-root) ioping statistics ---
10 requests completed in 9002.8 ms, 5470 iops, 21.4 mb/s
min/avg/max/mdev = 0.1/0.2/0.2/0.0 ms

ioping -RD .

--- . (ext3 /dev/mapper/pve-root) ioping statistics ---
13897 requests completed in 3000.1 ms, 6205 iops, 24.2 mb/s
min/avg/max/mdev = 0.1/0.2/24.7/0.5 ms

ioping -R .

--- . (ext3 /dev/mapper/pve-root) ioping statistics ---
9679 requests completed in 3030.0 ms, 3897 iops, 15.2 mb/s
min/avg/max/mdev = 0.0/0.3/390.7/4.6 ms