Fulltext results:
- Crypto @linux:filesystems
- # write random data on the partition cryptsetup -c aes-xts-plain64 ... run **badblocks** check or **dd** to overwrite all data with random bit patterns badblocks -c 10240 -s... -l" (check vgdisplay): lvcreate -l 476931 -n lvdata cryptvg Percentage or M / G are also possible: lvcreate -l 60%VG -n lvdata cryptvg lvcreate -L 50000G -n lvdata cryptvg
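  The snippet hints at the usual LUKS-on-LVM sequence. A minimal sketch, assuming /dev/sdX is the target disk and the cipher/volume names from the snippet (destroys all data, root only, so not runnable here):

  ```shell
  # 1. Overwrite with random bit patterns (badblocks write test) -- or use dd:
  badblocks -c 10240 -s -w -t random -v /dev/sdX
  # 2. Create and open the LUKS container (cipher from the snippet):
  cryptsetup -c aes-xts-plain64 luksFormat /dev/sdX
  cryptsetup luksOpen /dev/sdX cryptdisk
  # 3. LVM on top; the extent count for -l comes from vgdisplay ("Free PE"):
  pvcreate /dev/mapper/cryptdisk
  vgcreate cryptvg /dev/mapper/cryptdisk
  lvcreate -l 476931 -n lvdata cryptvg    # all free extents
  # or: lvcreate -l 60%VG -n lvdata cryptvg
  ```

  Note that lowercase `-l` takes extents or a percentage, while uppercase `-L` takes a size with a unit suffix such as G.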
- btrfs @linux:filesystems
- check: lsattr /var/lib/ balance reorders metadata - use after a lot of data has changed or the disk was full: fast: btrfs fi balance start -dusage=5 / good:... md raid or in raid mode, you can disable the metadata duplication for better performance btrfs bal s... he allocated space, not the space. Used shows the data-usage space. You need to add metadata usage. b
- Crash Recovery
- tratoserver.net/ Wait for 15-20min!! until "ServerData" Serverstatus shows: Installation finished Strat... ot@IP # recovery root passwd is shown in "Serverdata" mount /dev/md127 /mnt/ # system partitio... t boot mode to normal and wait 10min until "ServerData" Serverstatus shows: Installation finished =====... nt -t sysfs none /mnt/sys Now you can take your data back already. To repair what is broken, you prob
- Replication @linux:databases:mysql
- tion: If we know that only one node is performing data modifications we can avoid many possible problems... "slave" could be easily promoted to a new master. Data modifications are automatically replicated to fai... dates=1 ==== Activate replication ==== Import data e.g. from an innobackup on the slave - ideally it is the same data. At least it should not have much lag, so export
- PHP @linux:webserver
- owned by the same user as the sftp user - not www-data. **Pool per domain config:** cd /etc/php5/fpm... adduser --disabled-login USERNAME adduser www-data USERNAME mkdir /var/www/USERNAME chown -R USE
- Bootloader @linux:filesystems
- em, you can mount the partitions and transfer the data any way you like (as root user): 1. **rsync** needs params and data must be excluded (but works f&f in a running sys
- Streaming Server and Cam Software
- l gabekangas/owncast:latest docker run -v `pwd`/data:/app/data -p 8080:8080 -p 1935:1935 -it gabekangas/owncast:latest \\ ===== cvlc Webcam to Stream
- Nginx @linux:webserver
- :** mkdir /var/cache/ngx_pagespeed/ chown www-data:www-data /var/cache/ngx_pagespeed/ **edit /etc/nginx/sites-available/default:** server { #....
- Shell Commands & Oneliner
- tar cvpzf /mnt/somewhere/backup.tgz --exclude=/data --exclude=/proc --exclude=/lost+found --exclude=/... dump of complete filesystems ===== **backup raw data of a disc (as root): ** dd if=/dev/sdX | gzip
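  The tar exclude pattern from this snippet can be tried safely on a scratch tree; all paths below are illustrative, not from the wiki page:

  ```shell
  # Safe demo of tar with excludes on a throwaway directory.
  set -e
  root=$(mktemp -d)
  mkdir -p "$root/etc" "$root/data" "$root/proc"
  echo conf > "$root/etc/app.conf"
  echo big  > "$root/data/blob"
  # Archive everything under $root except ./data and ./proc:
  tar czpf /tmp/demo-backup.tgz -C "$root" --exclude=./data --exclude=./proc .
  # List the archive: etc/app.conf is kept, data/blob is not.
  tar tzf /tmp/demo-backup.tgz
  ```

  The `dd if=/dev/sdX | gzip` variant from the same page captures the raw device instead, including free space, so the tar approach usually produces far smaller backups.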
- Debian Server Install
- ps dselect aptitude iproute2 \ tcpdump rcconf tzdata traceroute tar less lftp locales ntpdate fail2ban... from somewhere (at least new packages) * rsync data * munin * monit * fail2ban * **install ne
- mySQL @linux:databases
- l give you an exact point-in-time snapshot of the data --single-transaction produces a checkpoint that allows the dump to capture all data prior to the checkpoint while receiving incoming
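  Put together, the snippet describes a consistent InnoDB dump without table locks. A hedged sketch (the `--master-data=2` flag and paths are assumptions beyond the snippet; `--master-data=2` records the binlog position as a comment, useful for the replication setup above):

  ```shell
  mysqldump --single-transaction --master-data=2 \
      --all-databases > /backup/all-databases.sql
  ```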
- vsFTPd @linux:network
- ssl_enable=YES allow_anon_ssl=NO force_local_data_ssl=NO force_local_logins_ssl=NO ssl_tlsv1=YE... pasv_min_port=55000 pasv_max_port=60000 ftp_data_port=20 listen_port=21 ===== Client testing =
- LXC @linux:virtualization
- er -a amd64 ==== Clone container ==== Copy all data: lxc-clone --backingstore btrfs --orig vs1 --ne
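  The clone command from the snippet only works on a stopped container; a sketch using the container names from the snippet (root only, so not runnable here):

  ```shell
  lxc-stop -n vs1                                      # clone source must be stopped
  lxc-clone --backingstore btrfs --orig vs1 --new vs2  # btrfs snapshot-backed copy
  lxc-start -n vs2 -d                                  # start the clone detached
  ```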
- dirvish @linux:backup
- 100GB of small mail files. Usually we mirror the data from a remote server and backup via dirvish from
- System Config
- ==== set timezone ==== dpkg-reconfigure tzdata ==== fix locales ==== apt-get install local
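  The interactive `dpkg-reconfigure tzdata` step can also be scripted; the zone name below is an example, and this needs root:

  ```shell
  ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime
  echo "Europe/Berlin" > /etc/timezone
  dpkg-reconfigure -f noninteractive tzdata
  ```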