Helpful commands for CMD, PowerShell, and the Linux CLI, as well as tutorials for various tasks:
The storage layer of the traditional computer architecture, which we have known since the 1960s, has been developed over decades. Still, it is the slowest layer of the whole hardware architecture pyramid (CPU, cache levels, memory, bus types) and therefore remains the bottleneck in most modern computer systems. In times of virtualization technology, the demand for fast storage layers has increased further. In this article I would like to give you some insights into the use of different storage technologies and their impact.
The requirements for a storage system vary with the purpose the system is supposed to fulfill. By answering the following questions you can get a sense of the storage type you need for your individual task:
When using benchmarking software you should keep the following rules in mind to avoid unpleasant surprises later:
You will find a lot of benchmarking software on the market, but there are only a few tools I can recommend:
Below you will find some benchmarking results I collected during my tests with various system setups. I hope my comments help with your own considerations.
Iometer is one of the most common benchmarking tools for testing IOPS performance. There is a VMware forum thread that specifies four tests for VMware environments; see the chapter 'Sources':
The Synology NAS RS2414+ has been installed to hold all first-level backup data and to act as a vdisk cache, keeping a maximum of five VMs running while the operative storage is under maintenance.
Test #1:
SERVER TYPE: Synology RS2414+
CPU TYPE: 2x Intel Atom D2700
OS VERSION: Synology DSM 6.1.2
STORAGE TYPE: WD RED 3TB, 8x hdd, 5400 rpm, S-ATA 3gbps,
RAID-5
FLASH CACHE TYPE:
Samsung 840 Evo 256 GB, 2x SSD, S-ATA 6gbps, RAID-0, Read Only
INTERFACE TYPE: iSCSI, 2x 1gbps, separate vlan, multipathing, round robin
SECTOR SIZE: 8K, VAAI
LUN SETTING: Advanced LUN, Thin Provisioning
TEST HOST TYPE: Windows 10
DURATION OF EACH TEST: 10 minutes
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load |
---|---|---|---|---|
Max Throughput-100%Read | 15.56 | 3835 | 119 | 13% |
RealLife-60%Rand-65%Read | 157.46 | 374 | 2 | 8% |
Max Throughput-50%Read | 17.21 | 3135 | 97 | 5% |
Random-8k-70%Read | 84.22 | 682 | 5 | 8% |
According to the results [...]
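As a plausibility check on the sequential numbers, average throughput is simply average IOPS times transfer size. A minimal sketch, assuming the 32 KB transfer size commonly used by the Iometer 'Max Throughput' profiles (verify against your own .icf file):

```shell
# Rough throughput check: MBps = IOPS * transfer size / 1024
# The 32 KB transfer size is an assumption, not taken from the tables above.
awk -v iops=3835 -v bs_kb=32 'BEGIN { printf "%.1f MBps\n", iops * bs_kb / 1024 }'
```

The result of roughly 119.8 MBps lines up with the 119 MBps measured for the 100% read test, which suggests the interface, not the disks, is the limit here (2x 1 Gbps iSCSI).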
Test #2:
SERVER TYPE: Synology RS2414+
CPU TYPE: 2x Intel Atom D2700
OS VERSION: Synology DSM 6.1.2
STORAGE TYPE: WD RED 3TB, 8x hdd, 5400 rpm, S-ATA 3gbps,
RAID-5
FLASH CACHE TYPE:
Samsung 840 Evo 256 GB, 2x SSD, S-ATA 6gbps, RAID-0, Read Only
INTERFACE TYPE: iSCSI, 2x 1gbps, separate vlan, multipathing, round robin
SECTOR SIZE: 8K, VAAI
LUN SETTING: Standard LUN, Eager Provisioning
TEST HOST TYPE: Windows 10
DURATION OF EACH TEST: 10 minutes
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load |
---|---|---|---|---|
Max Throughput-100%Read | 15.48 | 3855 | 120 | 6% |
RealLife-60%Rand-65%Read | 135.45 | 434 | 3 | 7% |
Max Throughput-50%Read | 16.48 | 3303 | 103 | 5% |
Random-8k-70%Read | 74.03 | 768 | 6 | 9% |
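To quantify the difference between the two LUN settings, you can compute the relative change of the RealLife results (374 IOPS thin provisioned in test #1 vs. 434 IOPS eager provisioned in test #2) directly on the shell:

```shell
# Relative IOPS gain of the eager-provisioned LUN over the thin-provisioned one
# in the RealLife-60%Rand-65%Read test (numbers taken from the two tables above)
awk -v thin=374 -v eager=434 'BEGIN { printf "%.1f%% more IOPS\n", (eager - thin) / thin * 100 }'
```

This yields roughly a 16% gain for the random real-life workload, while the sequential tests barely change.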
...
...
On this page you will find some useful commands to troubleshoot ESOS. This list is not complete and may become larger in the future.
# dmesg
Watch content of files with a refresh rate of 2 seconds
# watch cat /sys/block/bcache0/bcache/dirty_data
# top
# iostat -x
# fdisk -l | less
# e2fsck
# fsck.xfs /dev/bcache0
# xfs_repair /dev/bcache0
Flush (dirty) cache of the caching device to the backing device:
# echo 0 > /sys/block/bcache0/bcache/writeback_percent
When cache is clean you may want to set writeback_percent back to its default value 10:
# echo 10 > /sys/block/bcache0/bcache/writeback_percent
Switch cache mode between writethrough and writeback:
# echo writethrough > /sys/block/bcache0/bcache/cache_mode
# echo writeback > /sys/block/bcache0/bcache/cache_mode
For SSD cache performance testing disable the default 4 MB sequential_cutoff by executing:
# echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
After benchmarking set the value back to:
# echo 4M > /sys/block/bcache0/bcache/sequential_cutoff
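The two echo commands above can be wrapped in a small helper so a benchmark run cannot forget to restore the cutoff. This is only a sketch; the sysfs path assumes the bcache0 device used on this page and can be overridden via BCACHE_SYS:

```shell
#!/bin/sh
# Toggle the bcache sequential_cutoff around a benchmark run (sketch).
# BCACHE_SYS is assumed to point at the bcache sysfs directory.
BCACHE_SYS="${BCACHE_SYS:-/sys/block/bcache0/bcache}"

disable_cutoff() { echo 0  > "$BCACHE_SYS/sequential_cutoff"; }  # cache all I/O
restore_cutoff() { echo 4M > "$BCACHE_SYS/sequential_cutoff"; }  # back to the 4 MB default
```

Call disable_cutoff before the SSD cache benchmark and restore_cutoff afterwards.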
On this page you will find some useful commands for different tasks in the Windows Command Prompt (CMD), PowerShell (PS), the Windows Preinstallation Environment (Windows PE) and the Linux Command Line Interface (CLI). Most commands have been used on the following operating systems:
CMD, PS and Win PE with
CLI with the Linux distributions
C:\> shutdown /r /t 1
C:\> shutdown /s /t 1
C:\> shutdown /l
C:\> Netsh Advfirewall show allprofiles
C:\> NetSh Advfirewall set allprofiles state off
C:\> NetSh Advfirewall set allprofiles state on
Syntax: dfsutil root \\<Domain>\<Namespace>
Example: dfsutil root \\dev.beontec.eu\Sites
Syntax: dfsdiag /testdfsintegrity /dfsroot:\\<Domain>\<Namespace>
Example: dfsdiag /testdfsintegrity /dfsroot:\\dev.beontec.eu\Sites
You would like to see a result similar to this one:
Starting TestDfsIntegrity...
Validating the DFS metadata integrity of \\DEV\Sites...
Success: No known corruption is found in the DFS metadata.
Checking for DFS metadata consistency between domain controllers and the PDC emu
lator in the domain...
Success: DFS metadata is consistent across all accessible domain controllers and
the PDC emulator.
Checking the registry of the namespace servers...
Success: Registry information on namespace servers is consistent with the metada
ta in Active Directory Domain Services.
Validating reparse points of all DFS folders in namespace: \\DEV\Sites
Finished TestDfsIntegrity.
If you would like to see which files are currently being synchronized (processing and scheduled) by DFS Replication, you can do this by opening PowerShell on a domain member with the DFS Management tools feature installed and executing:
Syntax: .\dfsrdiag.exe ReplicationState /mem:<Hostname>
Example: .\dfsrdiag.exe ReplicationState /mem:SRV-FS2
Dfsrdiag SyncNow /partner:dfsserver2 /rgname:domainname\spacename\folder1 /member:dfsserver1 /time:5
Dfsrdiag SyncNow /partner:SRV-FS2 /rgname:mydomain.intern\corp\documents /member:SRV-FS1 /time:5
netsh int ipv6 isatap set state disabled (enabled)
netsh int ipv6 6to4 set state disabled (enabled)
netsh interface teredo set state disabled (default)
PS > gdr -PSProvider 'FileSystem'
PS > Echo y | chkdsk /x /v c:
PS > shutdown /r /t 1
fsutil dirty set D:
Query whether the drives have been marked as dirty
fsutil dirty query D:
After CHKDSK has finished, the system will reboot automatically. You can find the results in Event Viewer > Windows Logs > Application > Filter Current Log..., setting the Event source to Wininit and <All Event IDs> to 1001.
The result will look similar to this one:
Checking file system on C:
The type of the file system is NTFS.
Volume label is System.
A disk check has been scheduled.
Windows will now check the disk.
Stage 1: Examining basic file system structure ...
Cleaning up instance tags for file 0x90e7.
326400 file records processed. File verification completed.
13005 large file records processed. 0 bad file records processed.
Stage 2: Examining file name linkage ...
430740 index entries processed. Index verification completed.
0 unindexed files scanned. 0 unindexed files recovered.
Stage 3: Examining security descriptors ...
Cleaning up 1577 unused index entries from index $SII of file 0x9.
Cleaning up 1577 unused index entries from index $SDH of file 0x9.
Cleaning up 1577 unused security descriptors.
Security descriptor verification completed.
52171 data files processed. CHKDSK is verifying Usn Journal...
Usn Journal verification completed.
CHKDSK discovered free space marked as allocated in the volume bitmap.
Windows has made corrections to the file system.
No further action is required.
78281727 KB total disk space.
70630948 KB in 257174 files.
169156 KB in 52172 indexes.
0 KB in bad sectors.
401675 KB in use by the system.
65536 KB occupied by the log file.
7079948 KB available on disk.
4096 bytes in each allocation unit.
19570431 total allocation units on disk.
1769987 allocation units available on disk.
Internal Info:
00 fb 04 00 6c b8 04 00 82 46 09 00 00 00 00 00 ....l....F......
a2 01 00 00 a4 00 00 00 00 00 00 00 00 00 00 00 ................
Windows has finished checking your disk.
Please wait while your computer restarts.
There are two options: installing directly from the installation medium or using a network path. Mount the Windows Server 2012 R2 installation medium (e.g. drive letter E:) and specify the source path manually.
Install-WindowsFeature Net-Framework-Core -source E:\sources\sxs
Copy the SxS folder (E:\sources\sxs) to a network location.
Install-WindowsFeature Net-Framework-Core -source \\network\share\sxs
Before you perform any maintenance tasks in the Windows Preinstallation Environment, make sure your hard disks are attached to a storage controller or HBA for which Windows has drivers. Example: if you run your VMware VMs with a SCSI controller of type 'VMware Paravirtual', you need to switch them back to type 'LSI Logic SAS'. Windows PE does not include the Paravirtual driver and will not see any hard disks.
Boot from Windows installation disk (Windows 7 and Windows Server 2008 R2 upwards)
X:\Sources> sfc /scannow
Use the full command if you get the error message "there is a system repair pending which requires reboot to complete":
X:\Sources> sfc /scannow /OffBootDir=C:\ /OffWinDir=C:\Windows
Check file system integrity
X:\Sources> chkdsk /x /v c:
When having access to the boot menu you are able to choose between several boot options:
In the command prompt of the Win PE execute:
X:\Sources> bcdedit /set {bootmgr} displaybootmenu yes
X:\Sources> bcdedit /set {bootmgr} timeout 10
If you only execute the first command, the system will wait 30 seconds for user input before continuing to boot.
"In the system’s registry, the system’s configuration settings are stored under HKLM\System\CurrentControlSet\Control along with driver and service configurations stored under HKLM\System\CurrentControlSet\Services. Any change to these locations in the registry can render the system unbootable. If you happen to be in a situation where you are unable to boot into your Windows operating system normally, your system may have encountered a damaging change to the system’s registry prior to its last shutdown or reboot. In order to troubleshoot the issue, you have the opportunity to boot into the Last Known Good option by hitting F8 during the boot process. This will bring you to the Windows Advanced options Menu screen as shown below."
Source: https://blogs.technet.microsoft.com/askcore/2011/08/05/last-known-good/
# shutdown -r now
# shutdown -h now
In case you experience any issues with any of your Linux systems like VMware VCSA, vROps, vCOps, Apache webserver
To check disks and raid arrays attached:
# fdisk -l | less
To get an overview about the file system types (Linux, swap) of attached devices:
# cat /proc/mounts
# ls /mnt
To get an overview about the file systems (ext4, ext3 and xfs) of attached devices:
# blkid
To check and repair (-f) an ext4 formatted file system.
!! The devices need to be unmounted! Do not try to repair mounted devices; there is a high chance of damaging the file system even further !!
# fsck.ext4 -f /dev/sda1
# e2fsck -f /dev/sda1
To check a swap disk for bad blocks in verbose mode (-v) and output bad sectors detected
# badblocks -v /dev/sda2
Create a software-based caching device from an SSD and an HDD. The procedure is the same for NVMe (SSM) storage modules; in that case you need to create a Linux software RAID across the modules first.
You should consider providing at least two HDDs in a mirror and at least one SSD to act as the caching device (writethrough). If you would also like the caching device to cache written data (writeback), you should provide at least two SSDs.
In this scenario 'sda' will be the backing device and 'sdc' the caching device with the following specs:
sda = 12 TB RAID-10 array of 8x 3 TB WD Green-AV HDD, 5400 rpm
sdc = 550 GB RAID-10 array of 4x 275 GB Crucial MX300 SSD
Before you begin you should make sure you meet the following requirements:
Before you start make sure you will format the correct devices in your system by checking the devices with fdisk:
# fdisk -l | less
Compare the device sizes with the volumes you had created in your HBA or RAID controller.
# make-bcache -C /dev/sdc
# echo "/dev/sdc" > /sys/fs/bcache/register
# make-bcache -B /dev/sda
# echo "/dev/sda" > /sys/fs/bcache/register
Make sure you have no further data on sda and sdc since the devices will be formatted during the make-bcache process!
# bcache-super-show /dev/sdc
# echo "6d4ab278-0844-4a50-8e74-87aeda4fd353" > /sys/block/sda/bcache/attach
The UUID comes from 'cset.uuid' in the bcache-super-show command. The device '/dev/bcache0' should be created now!
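Instead of copying the UUID by hand, you can extract the cset.uuid field from the bcache-super-show output with awk. A minimal sketch (the attach path matches the sda backing device used above):

```shell
# Filter the cset.uuid value out of bcache-super-show output (sketch)
get_cset_uuid() {
    awk '$1 == "cset.uuid" { print $2 }'
}

# Usage on a real system, piping the UUID straight into the attach file:
# bcache-super-show /dev/sdc | get_cset_uuid > /sys/block/sda/bcache/attach
```

This avoids typos when transcribing the long UUID manually.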
Check which caching mode is enabled:
# cat /sys/block/bcache0/bcache/cache_mode
In case [writethrough] is selected you may want to switch to writeback mode:
# echo writeback > /sys/block/bcache0/bcache/cache_mode
When switching the caching mode from writeback to writethrough while dirty cache still exists on the caching device, it will be flushed automatically to the backing device. After switching the caching mode, make sure the cache is clean:
# echo writethrough > /sys/block/bcache0/bcache/cache_mode
$ watch cat /sys/block/bcache0/bcache/state
In case the cache stays being dirty you may want to force the flush by executing:
# echo 0 > /sys/block/bcache0/bcache/writeback_percent
$ watch cat /sys/block/bcache0/bcache/state
Once the cache becomes clean, revert the 'writeback_percent' value to 10:
# echo 10 > /sys/block/bcache0/bcache/writeback_percent
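The flush-and-wait sequence can be scripted so you do not have to watch the state file yourself. A sketch, assuming the bcache0 sysfs paths used on this page:

```shell
#!/bin/sh
# Wait until a bcache state file reports 'clean' (sketch).
# Pass the state file path, e.g. /sys/block/bcache0/bcache/state
wait_clean() {
    while [ "$(cat "$1")" != "clean" ]; do
        sleep 2     # same interval the watch command uses by default
    done
    echo "cache is clean"
}
```

Combined with the echo commands above: lower writeback_percent to 0, call wait_clean on the state file, then set writeback_percent back to 10.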
Check that everything has been set up correctly:
# cat /sys/block/bcache0/bcache/state
# mkfs.xfs /dev/bcache0
# mkfs.xfs /dev/bcache0 -L "fs1-bc0"
With the '-L' parameter you can label the new file system during the creation process. The first command alone is sufficient, though.
# cd /mnt/vdisks/
# mkdir fs1-bc0
# ls
# mount /dev/bcache0 /mnt/vdisks/fs1-bc0/
# vi /etc/fstab
Add the entry:
LABEL=fs1-bc0 /mnt/vdisks/fs1-bc0 xfs defaults 1 1
Check whether the file "vd" was created at /mnt/vdisks/fs1-bc0/
# cd /mnt/vdisks/fs1-bc0/
# ls
Is a "vd" file visible at "/mnt/vdisks/fs1-bc0/"?
If you want to permanently destroy the bcache volume, you need to wipe the bcache superblock from the underlying device. This operation is not exposed through the sysfs interface. So:
# echo 1 > /sys/block/<device>/bcache/stop
# head -c 1M /dev/zero > /dev/<caching device>
# head -c 1M /dev/zero > /dev/<backing device>
For some reason, wipefs is not included in ESOS 0.1.9, the version used in this tutorial.
"If you have a sufficiently new version of util-linux, you can use wipefs instead, which is more precise in wiping the bcache signature: wipefs -a /dev/<device>.) Obviously, you need to be careful to select the right device because this is a destructive operation that will wipe the header of the device. Take note that you will no longer have access to any data in the bcache volume!" Source: http://unix.stackexchange.com/questions/225017/how-to-remove-bcache0-volume
So, in case the ESXi logs are stored on an ESOS storage, switch them over to a storage device which is still accessible while ESOS is rebooting!
# cat /sys/block/bcache0/bcache/state
If the cache shows up as dirty you need to flush the dirty cache by:
# echo 0 > /sys/block/bcache0/bcache/writeback_percent
Observe the flushing process with:
# watch cat /sys/block/bcache0/bcache/dirty_data
As soon as you see '0.0', you should also change the cache_mode to 'writethrough':
# echo writethrough > /sys/block/bcache0/bcache/cache_mode
Verify this change by:
# cat /sys/block/bcache0/bcache/cache_mode
As soon as you see 'writethrough' in brackets you can reboot ESOS.
# reset
In case SCST is unable to find a mount point defined in the /etc/fstab file (e.g. when loading the entry 'LABEL=fs1-bc0 /mnt/vdisks/fs1-bc0 xfs defaults 1 1'), the directories of the mount points do not exist. (For some reason ESOS deletes these directories during the shutdown process and does not recreate them, and thus cannot mount the devices.) In this case proceed with step 5.
# mkdir /mnt/vdisks/fs1-bc0
# mkdir /mnt/vdisks/fs2-ssd
# mount /dev/bcache0 /mnt/vdisks/fs1-bc0
# mount /dev/sdb1 /mnt/vdisks/fs2-ssd
# /etc/rc.d/rc.scst start
# exit
Note: /etc/fstab gives you orientation which folders need to be created!
ESOS should be running now. :)
To automate the directory creation you can add these lines to the config file /etc/pre-scst_xtra_conf:
mkdir /mnt/vdisks/fs1-bc0
mkdir /mnt/vdisks/fs2-ssd
mount /dev/bcache0 /mnt/vdisks/fs1-bc0
mount /dev/sdb1 /mnt/vdisks/fs2-ssd
In case the file does not exist, create /etc/pre-scst_xtra_conf and insert the four lines shown above:
# touch /etc/pre-scst_xtra_conf
# vi /etc/pre-scst_xtra_conf
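A slightly more defensive variant for /etc/pre-scst_xtra_conf is sketched below (device and directory names are the ones used above): recreate each mount point only if missing and skip devices that are already mounted.

```shell
# Create the mount point if missing, then mount unless already mounted (sketch)
ensure_mounted() {
    dev="$1"; dir="$2"
    mkdir -p "$dir"                                   # recreate dir deleted by ESOS at shutdown
    grep -qs " $dir " /proc/mounts || mount "$dev" "$dir"
}

# In /etc/pre-scst_xtra_conf you would then call:
# ensure_mounted /dev/bcache0 /mnt/vdisks/fs1-bc0
# ensure_mounted /dev/sdb1   /mnt/vdisks/fs2-ssd
```

This way the script is safe to run repeatedly, e.g. when restarting SCST manually.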