Original link: http://lathama.net/Migrating_large_sparse_files_over_the_network

Migrating large sparse files over the network

Intro

When you need to move large sparse files across the network, there are several issues around support for this filesystem feature. Sparse files are files that report a size of X but only allocate the blocks on the filesystem that are actually used. This is a great use of space and very handy for virtualization. In the past, methods like copy-on-write (COW) were used to consume space only as it was needed, and those solutions worked. Sparse file support is now integrated into the Linux kernel, and it has become the preferred way to handle images.

Problem

You need to move a large (100GB+) file from one server to another over the network. The file is sparse, meaning only a small portion of it may actually be in use. You want neither to transfer every byte of data nor to fully allocate the file on the target system.
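As a quick sanity check (my addition, not in the original note), you can confirm a file really is sparse by comparing its apparent size with the blocks it actually occupies; `truncate` here just creates a throwaway sparse file to demonstrate:

```shell
# Create a 1 GB sparse test file: apparent size 1G, almost no blocks allocated.
truncate -s 1G /tmp/sparse-demo.img

# Apparent size in bytes (what ls and naive copies see):
apparent=$(du --apparent-size -B1 /tmp/sparse-demo.img | cut -f1)

# Actual allocated size in bytes (what the filesystem really stores):
actual=$(du -B1 /tmp/sparse-demo.img | cut -f1)

echo "apparent=$apparent actual=$actual"
# A sparse file shows actual far below apparent.

rm /tmp/sparse-demo.img
```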

Solution

Use tar with its sparse-file support and its ability to stream over stdin and stdout. Tar reads the source file twice (once normally and a second time for sparse handling) before streaming; on large files this takes time and processing power. The target file is checked as it is written.

Requirements

Pipe Viewer (pv) shows what is happening in the pipe. Without it you may go insane.

serverA:/# aptitude install pv
serverB:/# aptitude install pv

First, understand that tar is going to read the file TWICE. This takes a long time and makes it look like nothing is happening. Wait, wait, wait, and then smile. Select a port above 1024 and below 45,000 that is not in use by another service.

Example*

serverA:/# tar -cS IMG.img | pv -b | nc -n -q 15 172.20.2.3 5555
serverB:/# nc -n -l 5555 | pv -b | tar -xS

As another example, here is an SSH-based method. As with all SSH connections, it will push CPU load to 99%+ for the duration of the transfer, even with compression off.

tar -cS IMG.img | pv -b | ssh -o 'Compression no' root@172.20.2.3 "cat > IMG.img.tar"

Then you need to extract the TAR image.

tar -xSf IMG.img.tar

Summary

There are other ways to accomplish this; this method is the fastest I have found. Using rsync with its sparse option does work, but it transfers every null byte over the network, so it takes more time; it also runs checksums on both the source and target files. Further testing shows that compression can cause issues if one or both servers are under load. This method can also be used over SSH or other authenticated protocols.

* This method has only hung once for me. If it causes you issues, wait for the connection to time out or test with another image.

Here is how you get vzdump onto a clean CentOS installation (on the host node):

rpm -ivh "ftp://ftp.pbone.net/mirror/ftp.freshrpms.net/pub/freshrpms/pub/dag/redhat/el5/en/x86_64/RPMS.dag/cstream-2.7.4-3.el5.rf.x86_64.rpm"
wget http://dag.wieers.com/rpm/packages/perl-LockFile-Simple/perl-LockFile-Simple-0.206-1.el5.rf.noarch.rpm
rpm -ivh perl-LockFile-Simple-0.206-1.el5.rf.noarch.rpm
/bin/rm perl-LockFile-Simple-0.206-1.el5.rf.noarch.rpm
rpm -ivh "http://chrisschuld.com/centos54/vzdump-1.2-6.noarch.rpm"

Since version 1.2-6 of vzdump, the location of its modules is no longer found automatically; I have found it necessary to export the location of the PVE libraries that vzdump requires with this command:

export PERL5LIB=/usr/share/perl5/

DONE! 🙂

vzdump, vzrestore …

———-

IMPORTANT NOTE:
The steps above work for CentOS 5.
When I tried them on CentOS 6.2, everything fell apart.
On CentOS 6.2 it needs to be done as follows:

cd /tmp
wget http://pkgs.repoforge.org/cstream/cstream-2.7.4-3.el6.rf.i686.rpm
wget http://pkgs.repoforge.org/perl-LockFile-Simple/perl-LockFile-Simple-0.207-1.el6.rf.noarch.rpm
rpm -ivh cstream-2.7.4-3.el6.rf.i686.rpm
rpm -ivh perl-LockFile-Simple-0.207-1.el6.rf.noarch.rpm

rpm -ivh http://download.openvz.org/contrib/utils/vzdump/vzdump-1.2-4.noarch.rpm


aptitude install pure-ftpd
nano /etc/pure-ftpd.conf
---------
ChrootEveryone yes
BrokenClientsCompatibility no
MaxClientsNumber 10
Daemonize yes
MaxClientsPerIP 5
VerboseLog no
DisplayDotFiles no
AnonymousOnly no
NoAnonymous yes
SyslogFacility ftp
DontResolve yes
MaxIdleTime 15
PureDB /etc/pureftpd.pdb
LimitRecursion 2000 8
AnonymousCanCreateDirs no
MaxLoad 4
UserRatio 5 10
AntiWarez no
UserBandwidth 200
Umask 133:022
MinUID 100
AllowUserFXP no
AllowAnonymousFXP no
ProhibitDotFilesWrite yes
ProhibitDotFilesRead yes
AutoRename no
AnonymousCantUpload yes
AltLog stats:/var/log/pureftpd.log
NoChmod yes
CreateHomeDir yes
Quota 2000:500
MaxDiskUsage 80
CustomerProof yes
PerUserLimits 3:20
IPV4Only yes
------------
In /etc/default/pure-ftpd-common, set:

STANDALONE_OR_INETD=standalone
VIRTUALCHROOT=true

In /etc/pure-ftpd/conf/PureDB, put the path to the password database:

/etc/pure-ftpd/pureftpd.pdb

cd /etc/pure-ftpd/auth

ln -s /etc/pure-ftpd/conf/PureDB 50pure

sudo groupadd ftpgroup

sudo useradd -g ftpgroup -d /dev/null -s /bin/false ftpuser

Create our first virtual user

pure-pw useradd joe -u ftpuser -g ftpgroup -d /home/pubftp/joe

You will have to type the password twice, and then we are almost ready to go.

Then create the pure-ftpd password database from the password file by running:

pure-pw mkdb

Do this each time you make changes to the password file.

/etc/init.d/pure-ftpd start

Some other tips

To list users

pure-pw list
To see a user's information

pure-pw show joe
where joe is the user whose information you want to show.
To change a password

pure-pw passwd joe
Be sure to update the database by running:

pure-pw mkdb

Free Public DNS Servers

=> Service provider: Google
Google public dns server IP address:
8.8.8.8
8.8.4.4

=> Service provider:Dnsadvantage
Dnsadvantage free dns server list:
156.154.70.1
156.154.71.1

=> Service provider:OpenDNS
OpenDNS free dns server list / IP address:
208.67.222.222
208.67.220.220

=> Service provider:Norton
Norton free dns server list / IP address:
198.153.192.1
198.153.194.1

=> Service provider: GTEI DNS (now Verizon)
Public Name server IP address:
4.2.2.1
4.2.2.2
4.2.2.3
4.2.2.4
4.2.2.5
4.2.2.6

=> Service provider: ScrubIt
Public dns server address:
67.138.54.100
207.225.209.66
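To actually use one of these resolvers system-wide, the classic approach is to list it in /etc/resolv.conf (Google's pair shown here as an example):

```
# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```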

If you want to access an iLO behind a firewall, there are some TCP ports that need to be opened on the firewall to allow all iLO traffic to flow through. Here is a list of the default ports used by iLO, but these can be modified on iLO’s Administration… Access… Services… tab.

iLO Function              Socket Type   Port Number
----------------------    -----------   -----------
Secure Shell (SSH)        TCP           22
Remote Console/Telnet     TCP           23
Web Server Non-SSL        TCP           80
Web Server SSL            TCP           443
Terminal Services         TCP           3389
Virtual Media             TCP           17988
Shared Remote Console     TCP           9300
Console Replay            TCP           17990
Raw Serial Data           TCP           3002
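To verify from outside the firewall which of these ports actually answer, a quick probe loop like the following can help (a sketch using bash's /dev/tcp; `check_ilo_ports` is a made-up helper name, and 127.0.0.1 below is just a stand-in for your iLO address):

```shell
# Probe the default iLO TCP ports on a host and report each one's state.
check_ilo_ports() {
    host="$1"
    for port in 22 23 80 443 3389 17988 9300 17990 3002; do
        # /dev/tcp is a bash feature; timeout avoids hanging on filtered ports
        if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            echo "$port open"
        else
            echo "$port closed or filtered"
        fi
    done
}

check_ilo_ports 127.0.0.1
```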


perl -MCPAN -eshell

install Term::ReadKey
install DBD::mysql

exit

Quick Linux Tip:

If you’re trying to delete a very large number of files at one time (I deleted a directory with 485,000+ files today), you will probably run into this error:

/bin/rm: Argument list too long.

The problem is that when you type something like “rm -rf *”, the “*” is replaced with a list of every matching file, like “rm -rf file1 file2 file3 file4” and so on. There is a relatively small buffer of memory allocated to storing this list of arguments, and if it fills up, the shell will not execute the program.

To get around this problem, a lot of people will use the find command to find every file and pass them one-by-one to the “rm” command like this:

find . -type f -exec rm -v {} \;
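A middle ground (my addition, not from the original tip) is to batch the arguments with xargs, which invokes rm once per large batch instead of once per file:

```shell
# Set up a throwaway directory with some files to delete.
mkdir -p /tmp/xargs-demo && touch /tmp/xargs-demo/file{1..100}

# -print0 / -0 keep filenames with spaces or newlines safe;
# xargs packs as many names per rm invocation as the argument buffer allows.
find /tmp/xargs-demo -type f -print0 | xargs -0 rm

ls /tmp/xargs-demo | wc -l    # 0 files remain
```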

My problem is that I needed to delete 500,000 files and it was taking way too long.

I stumbled upon a much faster way of deleting files – the “find” command has a “-delete” flag built right in! Here’s what I ended up using:

find . -type f -delete

Using this method, I was deleting files at a rate of about 2000 files/second – much faster!

You can also show the filenames as you’re deleting them:

find . -type f -print -delete

…or even show how many files will be deleted, then time how long it takes to delete them:

root@devel# ls -1 | wc -l && time find . -type f -delete
real    0m3.660s
user    0m0.036s
sys     0m0.552s

Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,NC]

In this example,

we redirect visitors who enter the site as domain.com to www.domain.com.

root@yedek:~# cat /etc/debian_version
6.0.3

We installed Debian the standard way, with LVM.
Later we grew the LVM by adding new disks.
Legacy GRUB could not care less about LVM, but GRUB2, which came as the default with the new Debian, broke on the next kernel update.
Why?
Because GRUB2 actually pays attention to what is going on inside LVM.
I needed an urgent fix,
and here is what I did:

1- aptitude purge grub
2- aptitude purge grub-common (it will ask about some 30 packages; just say yes)
3- aptitude install grub-pc (this should be the grub2 package)
Even after this it failed,
because there was some silliness with the disk UUIDs,
so I ran the following command.

From a comment by drdrape on that post:

You can also run
sudo grub-mkdevicemap
which will update /boot/grub/device.map automatically

It updated the device map by itself,
and after that everything worked.

The detailed error log, and what happened to me, was similar to the situation in the post below.


While installing security updates in a seldom-used virtual machine, the latest kernel package was ready to be configured when I got the following error:

Setting up linux-image-2.6.32-5-amd64 (2.6.32-31) ...
Running depmod.
Running update-initramfs.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64
Generating grub.cfg ...
/usr/sbin/grub-probe: error: Couldn't find PV pv1. Check your device.map.
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-2.6.32-5-amd64.postinst line 799.
dpkg: error processing linux-image-2.6.32-5-amd64 (--configure):
subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
linux-image-2.6.32-5-amd64

At first I didn't quite get the line about "Couldn't find PV pv1. Check your device.map", but after some time it dawned on me that "PV" might mean "physical volume", a term used by LVM. I also remembered that I had extended the LVM volume group with an additional block device attached to the virtual machine.