1- Disable System Restore.

2- Disable virtual memory: Computer > Properties > Advanced system settings > Advanced tab > Performance > Settings > Virtual memory.

3- Check TRIM:

cmd > fsutil behavior query disabledeletenotify

DisableDeleteNotify = 0 means TRIM is enabled.
DisableDeleteNotify = 1 means TRIM is disabled.

4- Right-click the SSD > Properties > disable the indexing service.
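If the query reports TRIM as disabled, it can be turned on from an elevated command prompt with the matching set command:

fsutil behavior set disabledeletenotify 0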

 

Original post: http://www.aip.im/2010/08/monitoring-proxmox-openvz-containers-bandwidth/

Monitoring Proxmox OpenVZ containers’ bandwidth

Use these scripts to collect bandwidth data for each container. You can view the data on a graph, along with total usage by month.

The scripts originally came from Hutzoft, modified here to work with the Proxmox directory structure.

1. Install rrdtool and PHP support

apt-get install rrdtool php5

2. Download the bandwidth collection script and Web UI

wget http://www.aip.im/downloads/vzmonitor.tar.gz
or
wget http://www.shukko.com/vzmonitor.tar.gz

3. Unpack and relocate

tar zxvf vzmonitor.tar.gz
mkdir /usr/local/bandwidth
mv bandwidth.sh /usr/local/bandwidth/
chmod +x /usr/local/bandwidth/bandwidth.sh
mv vzmonitor /var/www/

4. Create a cron job to collect the data every 5 minutes (crontab -e); a sketch of what the collection script roughly does follows step 7

*/5 * * * * cd /usr/local/bandwidth;./bandwidth.sh &> /dev/null

5. Add config to Apache (pico /etc/apache2/conf.d/vzmonitor.conf)

Alias /vzmonitor /var/www/vzmonitor

<Directory /var/www/vzmonitor>
DirectoryIndex index.php
</Directory>

6. Restart Apache

/etc/init.d/apache2 restart

7. It’s ready; wait a few minutes, then visit this location to view the bandwidth usage: http://yourserver/vzmonitor
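The bundled bandwidth.sh is not reproduced here, but a collector of this kind typically reads each container’s venet0 counters and feeds per-container RRD files. A minimal sketch of the idea, assuming venet networking and the directory used above (the RRD layout and file names are assumptions, not the script’s actual contents):

#!/bin/bash
# Hypothetical collector sketch - NOT the bundled bandwidth.sh.
# For each running container, read venet0 byte counters and store them in an RRD.
for ctid in $(vzlist -H -o ctid); do
    rrd="/usr/local/bandwidth/${ctid}.rrd"
    # Create the RRD on first sight: rx/tx byte counters, 5-minute step, ~1 year of samples.
    [ -f "$rrd" ] || rrdtool create "$rrd" --step 300 \
        DS:rx:COUNTER:600:0:U DS:tx:COUNTER:600:0:U \
        RRA:AVERAGE:0.5:1:105120
    # Fields 3 and 11 of the venet0 line in /proc/net/dev are rx and tx bytes.
    counters=$(vzctl exec "$ctid" "grep venet0: /proc/net/dev" \
        | awk -F'[: ]+' '{print $3 ":" $11}')
    rrdtool update "$rrd" "N:${counters}"
done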

http://monalisa.cern.ch/FDT/

FDT is an Application for Efficient Data Transfers, capable of reading and writing at disk speed over wide area networks (with standard TCP). It is written in Java, runs on all major platforms, and is easy to use.

FDT is based on an asynchronous, flexible multithreaded system and uses the capabilities of the Java NIO libraries. Its main features are:

  • Streams a dataset (list of files) continuously, using a managed pool of buffers through one or more TCP sockets
  • Uses independent threads to read and write on each physical device
  • Transfers data in parallel on multiple TCP streams, when necessary
  • Uses appropriately sized buffers for disk I/O and for the network
  • Restores the files from buffers asynchronously
  • Resumes a file transfer session without loss, when needed

FDT can be used to stream a large set of files across the network, so that a large dataset composed of thousands of files can be sent or received at full speed, without the network transfer restarting between files.
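In practice one FDT instance runs in server mode on the receiver and one in client mode on the sender. A typical invocation, following the usage examples on the FDT site (verify the flags against your FDT version; receiver.example.com and the paths are placeholders):

java -jar fdt.jar
    (on the receiver: starts FDT in server mode)

java -jar fdt.jar -c receiver.example.com -d /data/incoming file1 file2
    (on the sender: pushes file1 and file2 to the receiver’s /data/incoming)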

 

yum install setuptool
yum install system-config-network*
yum install system-config-securitylevel-tui
yum install ntsysv

or, all in one command:

yum install setuptool system-config-network* system-config-securitylevel-tui ntsysv
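These packages provide the text-mode configuration menus; setuptool ties them together under a single entry point, so once installed you can reach all of them with:

setup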

Original: http://www.buildingcubes.com/2012/07/25/installing-my-3tb-hard-drive-on-debian-linux-step-by-step/

You can format it ext4, but ext2 and ext3 are also OK! ext2 and ext3 allow disks of up to 16TB and file sizes of up to 2TB; ext4 allows much more.

Any Linux kernel newer than 2.6.31 should work just fine with “Advanced Format” drives using the exact same steps in this article.

MBR only supports drives up to 2TB; beyond that you need GPT, so let us get started.

1- apt-get update
2- apt-get install parted
3- parted /dev/sdc
4- mklabel gpt
5- Answer yes to: Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
6- mkpart primary ext4 0% 100% (makes a partition as big as the disk, starting from the first megabyte, for alignment, and running to the end of the disk)
7- quit

Now to formatting the drive:

mkfs.ext4 /dev/sdc1

And there we are. Now we need to mount it at boot time by adding it to fstab; to do that, we will need the disk’s unique ID!

8- Executing the following command will give you the unique ID of the new partition for use with fstab (the file we will edit below in step 10):
blkid /dev/sdc1
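The output will look something like this (the UUID shown is the example reused in step 10; yours will differ):

/dev/sdc1: UUID="b7a491b1-a690-468f-882f-fbb4ac0a3b53" TYPE="ext4"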
9- Create the directory where you want to mount your hard disk, for example:
mkdir /hds
mkdir /hds/3tb
10- Now, we add the following line to fstab

UUID=b7a491b1-a690-468f-882f-fbb4ac0a3b53       /hds/3tb            ext4     defaults,noatime                0       1

11- Now execute
mount -a

You are done. If you execute
df -h
you should see your 2+TB hard drive in there!

Original link: http://www.debian-administration.org/articles/390

One common server bottleneck is DNS lookups. Many common server tasks, from looking up hostnames to writing Apache logfiles to processing incoming mail, require DNS queries. If you’re running a high-traffic system it might be useful to cache previous lookups.

There are several different packages you can use for caching DNS requests – including bind, djbdns, dnsmasq and pdnsd.

The pdnsd package is a very simple and lightweight tool for DNS caching. It will, like many of the other systems, act as a small DNS server forwarding requests to a “real” DNS server and caching the responses.

When pdnsd is stopped it will save all the lookups which have been made against it so they may be reloaded when it starts again.

Installation is very straightforward:

apt-get install pdnsd

Once installed the software is configured via the file /etc/pdnsd.conf.

To configure the software you must do two things:

  • Configure pdnsd so that it will forward requests it doesn’t know about to a real DNS server, letting it cache those results.
  • Update your system so that DNS lookups go against the newly installed cache, or proxy.

Once you’ve completed these two steps all DNS lookups upon your system will be cached, and your DNS lookups should be much faster.

Upon your Debian GNU/Linux system you configure the DNS server(s) in use by means of the file /etc/resolv.conf. This file contains a list of name servers to query, perhaps along with a search domain to be used for unqualified hosts.

To tell your server to make DNS queries against the freshly installed server you would update that file to read:

nameserver 127.0.0.1

The next thing to do is to edit the pdnsd configuration file /etc/pdnsd.conf to specify which DNS servers the cache should use for its own lookups – these will most likely be your ISP’s nameservers.

Locate the section of the configuration file which starts with server and add the IP address:

#
#  Specify the IP address of the real DNS server to query against here:
#
server {
        ip=11.22.33.44;   
        timeout=30;
        interval=30;
        uptest=ping;
        ping_timeout=50;
        purge_cache=off;
}

With this setting updated you can restart the caching service:

root@itchy:/etc# /etc/init.d/pdnsd restart
Restarting proxy DNS server: pdnsd.
root@itchy:/etc#

If you wish to add more DNS servers to query against you can add them separated by commas, or you can add multiple ip= lines, as in these two examples:

       # Several IPs separated by commas.
       ip=11.22.33.44,111.222.333.444;

       # Easier to read - one per line:
       ip=11.22.33.44;
       ip=111.222.333.444;

For more details of the supported options please consult the documentation by running “man pdnsd.conf”.

You can test that the cache is working by issuing a manual request to it:

root@itchy:/etc# dig  @localhost example.com mx

;; QUESTION SECTION:
;example.com.                   IN      MX

;; AUTHORITY SECTION:
example.com.            86400   IN      SOA     dns1.icann.org. hostmaster.icann.org.

;; Query time: 2224 msec
;; SERVER: 192.168.1.50#53(192.168.1.50)
;; WHEN: Sun Apr 23 21:47:41 2006
;; MSG SIZE  rcvd: 90

Here we used the dig command (part of the dnsutils package) to look up the MX record of the domain name example.com. Notice at the bottom it shows “Query time: 2224 msec”? Let’s run that same query again – if our cache is working correctly it should be significantly faster:

root@itchy:/etc# dig  @itchy example.com mx |grep time
;; Query time: 1 msec

Much faster 🙂

(Yes, DNS queries are ordinarily cached to a certain extent, so you’d expect some speedup on repeat queries even without our explicit DNS caching server…)

I have a directory.
Inside it are 400,000 tiny files.
I want to delete the ones older than 180 days.
The classic methods say “argument list too long”…
What is my command?
This is it:

find . -type f -mtime +180 -print -delete
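For find versions without -delete, a pipe through xargs sidesteps the same argument-list limit (standard find/xargs options):

find . -type f -mtime +180 -print0 | xargs -0 rm -f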


#!/bin/bash
# Package every cPanel account (one entry per user in /var/cpanel/users)
# into /backup/cpbackup/daily, skipping compression.
for i in $( ls /var/cpanel/users ); do
    /scripts/pkgacct "$i" /backup/cpbackup/daily backup nocompress
done
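Saved as, say, /root/cpbackup.sh (a hypothetical path), the script can be scheduled nightly from crontab -e:

0 2 * * * /root/cpbackup.sh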

mainboard: X8DTL-i
CPU:Intel(R) Xeon(R) CPU E5606 @ 2.13GHz
BIOS: X8DTL31.C30 – BIOS Revision: R 2.1a
OS: Centos 6.3 Latest
Kernel: 2.6.32-279.5.2.el6.x86_64
07:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
e1000e driver: latest from elrepo kmod-e1000e
modinfo e1000e
filename: /lib/modules/2.6.32-279.5.2.el6.x86_64/weak-updates/e1000e/e1000e.ko
version: 2.0.0-NAPI
license: GPL
description: Intel(R) PRO/1000 Network Driver
author: Intel Corporation
srcversion: BBDF1C9420EE194E4015419

With this new BIOS there is an option in the BIOS setup screen to completely disable ASPM.

After this upgrade, and after adding the line below to grub.conf,

pcie_aspm=off e1000e.IntMode=1,1 e1000e.InterruptThrottleRate=10000,10000 acpi=off

my server has been online without any problems for the last 48 hours.

Before that, eth0 and eth1 were crashing at random intervals.
The longest uptime before a crash was 23 hours.

It looks like the problem is solved for me with this upgrade…

taken from: http://www.doxer.org/learn-linux/resolved-intel-e1000e-driver-bug-on-82574l-ethernet-controller-causing-network-blipping/

Earlier I posted a question about CentOS 6.2 losing its internet connection intermittently. Now I have finally found the right way to fix this.

Firstly, this is a known bug in the Intel e1000e driver on Linux platforms. It is a driver problem with the Intel 82574L (an MSI/MSI-X interrupts issue). The internet connection dropped now and then, and nothing was logged about it, which is very bad for troubleshooting.
You can see more bug reports about this at https://bugzilla.redhat.com/show_bug.cgi?id=632650

Fortunately, we can resolve this by installing the kmod-e1000e package from ELRepo.org. To solve it, do the following (the original article struck out several of these steps; see the PS at the end for why):

Install kmod-e1000e offered by Elrepo

Import the public key:
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

To install ELRepo for RHEL-5, SL-5 or CentOS-5:
rpm -Uvh http://elrepo.org/elrepo-release-5-3.el5.elrepo.noarch.rpm

To install ELRepo for RHEL-6, SL-6 or CentOS-6:
rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm

Before installing the new driver, let’s see our old one:
[root@doxer sites]# lspci |grep -i ethernet
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

[root@doxer modprobe.d]# lsmod|grep e100
e1000e 219500 0

[root@doxer modprobe.d]# modinfo e1000e
filename: /lib/modules/2.6.32-220.7.1.el6.x86_64/kernel/drivers/net/e1000e/e1000e.ko
version: 1.4.4-k
license: GPL
description: Intel(R) PRO/1000 Network Driver
author: Intel Corporation
srcversion: 6BD7BCA22E0864D9C8B756A

Now let’s install the new kmod-e1000e offered by elrepo:
[root@doxer yum.repos.d]# yum list|grep -i e1000
kmod-e1000.x86_64 8.0.35-1.el6.elrepo elrepo
kmod-e1000e.x86_64 1.9.5-1.el6.elrepo elrepo

[root@doxer yum.repos.d]# yum -y install kmod-e1000e.x86_64

After installation, reboot your machine, and you’ll find the driver updated:
[root@doxer ~]# modinfo e1000e
filename: /lib/modules/2.6.32-220.7.1.el6.x86_64/weak-updates/e1000e/e1000e.ko
version: 1.9.5-NAPI
license: GPL
description: Intel(R) PRO/1000 Network Driver
author: Intel Corporation
srcversion: 16A9E37B9207620F5453F5E

[root@doxer ~]# lsmod|grep e100
e1000e 229197 0

Change kernel parameters

Append the following parameters to the kernel line in grub.conf:

pcie_aspm=off e1000e.IntMode=1,1 e1000e.InterruptThrottleRate=10000,10000 acpi=off
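For reference, these are appended to the end of the existing kernel line in grub.conf, so the line ends up looking roughly like this (kernel version taken from the setup above; the root device is a placeholder, keep whatever yours already says):

kernel /vmlinuz-2.6.32-279.5.2.el6.x86_64 ro root=/dev/sda1 pcie_aspm=off e1000e.IntMode=1,1 e1000e.InterruptThrottleRate=10000,10000 acpi=off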

Change NIC parameters (add these lines to /etc/rc.local):

#disable pause autonegotiation
/sbin/ethtool -A eth0 autoneg off
/sbin/ethtool -s eth0 autoneg off
#change tx ring buffer (4096 may be too large; consider 512)
/sbin/ethtool -G eth0 tx 4096
#to raise the interrupt rate instead: ethtool -C eth0 rx-usecs 10 (roughly 10000 interrupts per second)
#change rx ring buffer
/sbin/ethtool -G eth0 rx 128
#disable wake-on-LAN
/sbin/ethtool -s eth0 wol d
#turn off offloading
/sbin/ethtool -K eth0 tx off rx off sg off tso off gso off gro off
#enable TX pause
/sbin/ethtool -A eth0 tx on
#disable ASPM on the NIC's PCI devices (bus addresses from lspci)
/sbin/setpci -s 02:00.0 CAP_EXP+10.b=40
/sbin/setpci -s 00:19.0 CAP_EXP+10.b=40

PS:

pcie_aspm refers to Active-State Power Management, a PCI Express power-saving mechanism.
acpi is short for Advanced Configuration and Power Interface.
apic is short for Advanced Programmable Interrupt Controller, which is related to IRQ handling; the APIC is one kind of PIC, and Intel and some other NICs have this feature.

Now reboot your machine and you should have steadier networking!

PS2:

The reason there are so many strikeouts in this article is that I struggled a lot with this bug. At first I thought it was caused by a kernel bug in the e1000e driver, so after some searching I installed the kmod-e1000e driver and modified the kernel parameters. Things got better for a short time. Later I found the issue was still there, so I tried compiling the latest e1000e driver from Intel. But that did not work either.

Later I ran a script that monitored the network around the times the NIC went down. After the NIC had failed several times, I found that Tx traffic spiked each time just before the failure (TX bytes went up by something like 5Gb in a very short time). Based on this, I realized there might be a DoS attack on the server. Using ntop and tcpdump, I found that DNS traffic was very large, even though my host was not providing DNS services at all!

Then I wrote some iptables rules to disallow DNS queries and the like, and after that the host became steady again! Traffic went back to normal, and everything is now on track. I’m happy and excited about this, as it is the first time I’ve stopped a DoS attack!
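The article does not show the exact rules. For a host that serves no DNS at all, the idea boils down to dropping inbound DNS before it goes any further (a minimal sketch; interface and chain-policy details are assumptions):

#drop inbound DNS queries on both transports
/sbin/iptables -A INPUT -p udp --dport 53 -j DROP
/sbin/iptables -A INPUT -p tcp --dport 53 -j DROP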