Azure managed disks are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. Managed disks are like a physical disk in an on-premises server, but virtualized. With managed disks, all you have to do is specify the disk size and the disk type, and provision the disk. Once you provision the disk, Azure handles the rest.
The available types of disks are ultra disks, premium solid-state drives (SSD), standard SSDs, and standard hard disk drives (HDD). For information about each individual disk type, see Select a disk type for IaaS VMs.
Benefits of managed disks
Let's go over some of the benefits you gain by using managed disks.
Highly durable and available
Managed disks are designed for 99.999% availability. Managed disks achieve this by providing you with three replicas of your data, allowing for high durability. If one or even two replicas experience issues, the remaining replicas help ensure persistence of your data and high tolerance against failures. This architecture has helped Azure consistently deliver enterprise-grade durability for infrastructure as a service (IaaS) disks, with an industry-leading 0% annualized failure rate.
Simple and scalable VM deployment
Using managed disks, you can create up to 50,000 VM disks of a type in a subscription per region, allowing you to create thousands of VMs in a single subscription. This feature also further increases the scalability of virtual machine scale sets by allowing you to create up to 1,000 VMs in a virtual machine scale set using a Marketplace image.
Integration with availability sets
Managed disks are integrated with availability sets to ensure that the disks of VMs in an availability set are sufficiently isolated from each other to avoid a single point of failure. Disks are automatically placed in different storage scale units (stamps). If a stamp fails due to hardware or software failure, only the VM instances with disks on those stamps fail. For example, let's say you have an application running on five VMs, and the VMs are in an availability set. The disks for those VMs won't all be stored in the same stamp, so if one stamp goes down, the other instances of the application continue to run.
Integration with Availability Zones
Managed disks support Availability Zones, which is a high-availability offering that protects your applications from datacenter failures. Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. With Availability Zones, Azure offers an industry-best 99.99% VM uptime SLA.
Azure Backup support
To protect against regional disasters, Azure Backup can be used to create a backup job with time-based backups and backup retention policies. This allows you to perform VM or managed disk restorations at will. Currently, Azure Backup supports disks up to 32 tebibytes (TiB) in size. Learn more about Azure VM backup support.
Azure Disk Backup
Azure Backup offers Azure Disk Backup (preview) as a native, cloud-based backup solution that protects your data in managed disks. It's a simple, secure, and cost-effective solution that enables you to configure protection for managed disks in a few steps. Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for managed disks by automating the periodic creation of snapshots and retaining them for a configured duration using a backup policy. For details on Azure Disk Backup, see Overview of Azure Disk Backup (in preview).
Granular access control
You can use Azure role-based access control (Azure RBAC) to assign specific permissions for a managed disk to one or more users. Managed disks expose a variety of operations, including read, write (create/update), delete, and retrieving a shared access signature (SAS) URI for the disk. You can grant access to only the operations a person needs to perform their job. For example, if you don't want a person to copy a managed disk to a storage account, you can choose not to grant access to the export action for that managed disk. Similarly, if you don't want a person to use an SAS URI to copy a managed disk, you can choose not to grant that permission to the managed disk.
Upload your VHD
Direct upload makes it easy to transfer your VHD to an Azure managed disk. Previously, you had to follow a more involved process that included staging your data in a storage account. Now, there are fewer steps. It's easier to upload on-premises VMs to Azure, upload to large managed disks, and the backup and restore process is simplified. Direct upload also reduces cost by allowing you to upload data to managed disks directly without attaching them to VMs. You can use direct upload to upload VHDs up to 32 TiB in size.
To learn how to transfer your VHD to Azure, see the CLI or PowerShell articles.
Security
Private Links
Private Link support for managed disks can be used to import or export a managed disk while keeping the traffic internal to your network. Private Links allow you to generate a time-bound shared access signature (SAS) URI for unattached managed disks and snapshots that you can use to export the data to other regions for regional expansion, disaster recovery, and forensic analysis. You can also use the SAS URI to directly upload a VHD to an empty disk from on-premises. You can use Private Links to restrict the export and import of managed disks so that it can occur only within your Azure virtual network. Private Links ensure your data travels only within the secure Microsoft backbone network.
To learn how to enable Private Links for importing or exporting a managed disk, see the CLI or Portal articles.
Encryption
Managed disks offer two different kinds of encryption. The first is Server Side Encryption (SSE), which is performed by the storage service. The second one is Azure Disk Encryption (ADE), which you can enable on the OS and data disks for your VMs.
Server-side encryption
Server-side encryption provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments. Server-side encryption is enabled by default for all managed disks, snapshots, and images, in all the regions where managed disks are available. (Temporary disks, on the other hand, are not encrypted by server-side encryption unless you enable encryption at host; see Disk Roles: temporary disks).
You can either allow Azure to manage your keys for you (platform-managed keys), or you can manage the keys yourself (customer-managed keys). Visit the Server-side encryption of Azure Disk Storage article for details.
Azure Disk Encryption
Azure Disk Encryption allows you to encrypt the OS and Data disks used by an IaaS Virtual Machine. This encryption includes managed disks. For Windows, the drives are encrypted using industry-standard BitLocker encryption technology. For Linux, the disks are encrypted using the DM-Crypt technology. The encryption process is integrated with Azure Key Vault to allow you to control and manage the disk encryption keys. For more information, see Azure Disk Encryption for Linux VMs or Azure Disk Encryption for Windows VMs.
Disk roles
There are three main disk roles in Azure: the data disk, the OS disk, and the temporary disk. These roles map to disks that are attached to your virtual machine.
Data disk
A data disk is a managed disk that's attached to a virtual machine to store application data, or other data you need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Each data disk has a maximum capacity of 32,767 gibibytes (GiB). The size of the virtual machine determines how many data disks you can attach to it and the type of storage you can use to host the disks.
OS disk
Every virtual machine has one attached operating system disk. That OS disk has a pre-installed OS, which was selected when the VM was created. This disk contains the boot volume.
This disk has a maximum capacity of 4,095 GiB.
Temporary disk
Most VMs contain a temporary disk, which is not a managed disk. The temporary disk provides short-term storage for applications and processes, and is intended to only store data such as page or swap files. Data on the temporary disk may be lost during a maintenance event or when you redeploy a VM. During a successful standard reboot of the VM, data on the temporary disk will persist. For more information about VMs without temporary disks, see Azure VM sizes with no local temporary disk.
On Azure Linux VMs, the temporary disk is typically /dev/sdb, and on Windows VMs the temporary disk is D: by default. The temporary disk is not encrypted by server-side encryption unless you enable encryption at host.
Managed disk snapshots
A managed disk snapshot is a read-only crash-consistent full copy of a managed disk that is stored as a standard managed disk by default. With snapshots, you can back up your managed disks at any point in time. These snapshots exist independent of the source disk and can be used to create new managed disks.
Snapshots are billed based on the used size. For example, if you create a snapshot of a managed disk with provisioned capacity of 64 GiB and actual used data size of 10 GiB, that snapshot is billed only for the used data size of 10 GiB. You can see the used size of your snapshots by looking at the Azure usage report. For example, if the used data size of a snapshot is 10 GiB, the daily usage report will show 10 GiB/(31 days) = 0.3226 as the consumed quantity.
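The billing arithmetic above can be sketched in a few lines of Python; the figures are taken directly from the example, and the 31-day month is the assumption the example itself uses:

```python
# Snapshot billing sketch: snapshots are billed on used size, not
# provisioned size. Values mirror the example above.
provisioned_gib = 64   # provisioned capacity of the source disk
used_gib = 10          # actual used data captured in the snapshot

# The billed size is the used size, regardless of provisioned capacity.
billed_gib = used_gib

# The daily usage report spreads the quantity across the days of the
# month (31 days in this example).
days_in_month = 31
daily_consumed_quantity = round(billed_gib / days_in_month, 4)

print(billed_gib)               # 10
print(daily_consumed_quantity)  # 0.3226
```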
To learn more about how to create snapshots for managed disks, see the following resources:
Images
Managed disks also support creating a managed custom image. You can create an image from your custom VHD in a storage account or directly from a generalized (sysprepped) VM. This process captures a single image. This image contains all managed disks associated with a VM, including both the OS and data disks. This managed custom image enables creating hundreds of VMs using your custom image without the need to copy or manage any storage accounts.
For information on creating images, see the following articles:
Images versus snapshots
It's important to understand the difference between images and snapshots. With managed disks, you can take an image of a generalized VM that has been deallocated. This image includes all of the disks attached to the VM. You can use this image to create a VM, and it includes all of the disks.
A snapshot is a copy of a disk at the point in time the snapshot is taken. It applies only to one disk. If you have a VM that has one disk (the OS disk), you can take a snapshot or an image of it and create a VM from either the snapshot or the image.
A snapshot doesn't have awareness of any disk except the one it contains. This makes it problematic to use in scenarios that require the coordination of multiple disks, such as striping. Snapshots would need to be able to coordinate with each other and this is currently not supported.
Disk allocation and performance
The following diagram depicts real-time allocation of bandwidth and IOPS for disks, using a three-level provisioning system:
The first level of provisioning sets the per-disk IOPS and bandwidth assignment. At the second level, the compute server host implements SSD provisioning, applying it only to data that is stored on the server's SSD, which includes disks with caching (ReadWrite and ReadOnly) as well as local and temp disks. Finally, VM network provisioning takes place at the third level for any I/O that the compute host sends to Azure Storage's backend. With this scheme, the performance of a VM depends on a variety of factors, from how the VM uses the local SSD, to the number of disks attached, to the performance and caching type of the disks it has attached.
As an example of these limitations, a Standard_DS1v1 VM is prevented from achieving the 5,000 IOPS potential of a P30 disk, whether it is cached or not, because of limits at the SSD and network levels:
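As a rough sketch of how these stacked limits interact, effective throughput is bounded by the smallest limit along the I/O path. The P30 figure comes from the example above; the VM-level caps below are illustrative assumptions, not published limits for any specific VM size:

```python
# Illustrative sketch: a VM's effective disk IOPS is capped by the
# smallest limit along the I/O path. The P30 disk limit (5,000 IOPS)
# comes from the example above; the VM-level caps are hypothetical.
disk_iops = 5000         # P30 disk provisioned IOPS (from the example)
vm_uncached_iops = 3200  # hypothetical VM network-level cap
vm_cached_iops = 4000    # hypothetical VM local-SSD (cache) cap

effective_uncached = min(disk_iops, vm_uncached_iops)
effective_cached = min(disk_iops, vm_cached_iops)

print(effective_uncached)  # 3200 -- the VM cap, not the disk, is the bottleneck
print(effective_cached)    # 4000 -- caching raises, but does not remove, the cap
```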
Azure uses a prioritized network channel for disk traffic, which takes precedence over other, lower-priority network traffic. This helps disks maintain their expected performance during network contention. Similarly, Azure Storage handles resource contention and other issues in the background with automatic load balancing. Azure Storage allocates the required resources when you create a disk, and applies proactive and reactive balancing of resources to handle the traffic level. This further ensures that disks can sustain their expected IOPS and throughput targets. You can use VM-level and disk-level metrics to track performance and set up alerts as needed.
Refer to our design for high performance article to learn best practices for optimizing VM and disk configurations so that you can achieve your desired performance.
Next steps
If you'd like a video going into more detail on managed disks, check out: Better Azure VM Resiliency with Managed Disks.
Learn more about the individual disk types Azure offers, which type is a good fit for your needs, and learn about their performance targets in our article on disk types.
This article discusses how to check an NTFS file system's disk space allocation to discover offending files and folders or look for volume corruption in Microsoft Windows Server 2003-based computers.
Original product version: Windows Server 2003
Original KB number: 814594
Summary
NTFS supports many volume and file-level features that may lead to what appear to be lost or incorrectly reported free disk space. For example, an NTFS volume may suddenly appear to become full for no reason, and an administrator cannot find the cause or locate the offending folders and files. This may occur after malicious or unauthorized access to an NTFS volume, during which large files or a high quantity of small files are secretly copied and then have their NTFS permissions removed or restricted. This behavior may also occur after a computer malfunction or power outage that causes volume corruption.
The disk space allocation of an NTFS volume may appear to be misreported for any of the following reasons:
- The NTFS volume's cluster size is too large for the average-sized files that are stored there.
- File attributes or NTFS permissions prevent Windows Explorer or a Windows command prompt from displaying or accessing files or folders.
- The folder path exceeds 255 characters.
- Folders or files contain invalid or reserved file names.
- NTFS metafiles (such as the Master File Table) have grown, and you cannot de-allocate them.
- Files or folders contain alternate data streams.
- NTFS corruption causes free space to be reported as in use.
- Other NTFS features may cause file-allocation confusion.
The following information can help you to optimize, repair, or gain a better understanding of how your NTFS volumes use disk space.
Cluster size is too large
Only files and folders, including internal NTFS metafiles such as the Master File Table (MFT) and folder indexes, can consume disk space. These files and folders consume all file space allocations by using multiples of a cluster. A cluster is a collection of contiguous sectors. The cluster size is determined by the partition size when the volume is formatted.
For more information about clusters, see Default cluster size for NTFS, FAT, and exFAT.
When a file is created, it consumes a minimum of a single cluster of disk space, depending on the initial file size. When data is later added to a file, NTFS increases the file's allocation in multiples of the cluster size.
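The allocation rule just described can be sketched in Python. A 4-KB cluster size is assumed here, matching the chkdsk example later in this article (note that NTFS can also store very small files resident in the MFT, which this simplified sketch ignores):

```python
import math

def allocated_size(file_size_bytes: int, cluster_bytes: int = 4096) -> int:
    """Return the on-disk allocation for a file: NTFS grows a file's
    allocation in whole multiples of the cluster size."""
    if file_size_bytes == 0:
        return 0  # simplification: ignores MFT-resident storage of tiny files
    return math.ceil(file_size_bytes / cluster_bytes) * cluster_bytes

print(allocated_size(1))     # 4096 -- one byte still consumes a whole cluster
print(allocated_size(4096))  # 4096 -- an exact fit uses exactly one cluster
print(allocated_size(4097))  # 8192 -- one extra byte spills into a new cluster
```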
To determine the current cluster size and volume statistics, run a read-only chkdsk command from a command prompt. To do so, follow these steps:
Click Start, click Run, type cmd, and then click OK.
At the command prompt, type the following command, where d: is the letter of the drive that you want to check, and then press ENTER:
chkdsk d:
View the resulting output. For example:
4096543 KB total disk space.                  <--- Total formatted disk capacity.
2906360 KB in 19901 files.                    <--- Space used by user file data.
6344 KB in 1301 indexes.                      <--- Space used by NTFS indexes.
0 KB in bad sectors.                          <--- Space lost to bad sectors.
49379 KB in use by the system.                <--- Includes MFT and other NTFS metafiles.
22544 KB occupied by the log file.            <--- NTFS log file (adjustable with chkdsk /L:size).
1134460 KB available on disk.                 <--- Available free disk space.
4096 bytes in each allocation unit.           <--- Cluster size (4 KB).
1024135 total allocation units on disk.       <--- Total clusters on disk.
283615 allocation units available on disk.    <--- Available free clusters.
Note
Multiply each value that the output reports in kilobytes (KB) by 1024 to determine accurate byte counts. For example: 2906360 x 1024 = 2,976,112,640 bytes. You can use this information to determine how your disk space is being used and the default cluster size.
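The conversion described in the note can be sketched as follows, using the file-data figure from the sample output:

```python
# chkdsk reports sizes in kilobytes (1 KB = 1024 bytes).
# Multiply by 1024 to recover the exact byte count, as in the note above.
kb_in_files = 2906360           # "KB in 19901 files" from the sample output
bytes_in_files = kb_in_files * 1024

print(bytes_in_files)  # 2976112640
```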
To determine whether this is the optimal cluster size, you must determine the wasted space on your disk. To do so, follow these steps:
Click Start, click My Computer, and then double-click the drive letter (for example, D) of the volume in question to open the volume and display the folders and files that the root contains.
Click any file or folder, and then click Select All on the Edit menu.
With all the files and folders selected, right-click any file or folder, click Properties, and then click the General tab.
The General tab displays the total number of files and folders on the whole volume and provides two file size statistics: SIZE and SIZE ON DISK.
If you are not using NTFS compression for any files or folders contained on the volume, the difference between SIZE and SIZE ON DISK may represent some wasted space because the cluster size is larger than necessary. You may want to use a smaller cluster size so that the SIZE ON DISK value is as close to the SIZE value as possible. A large difference between the SIZE ON DISK and the SIZE value indicates that the default cluster size is too large for the average file size that you are storing on the volume.
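The SIZE versus SIZE ON DISK comparison can be sketched for a hypothetical set of files. The file sizes and candidate cluster sizes below are illustrative assumptions, chosen to show how waste grows with cluster size:

```python
import math

def size_on_disk(sizes, cluster):
    """Sum of per-file allocations, each rounded up to a whole cluster
    (the SIZE ON DISK statistic, ignoring compression and MFT residency)."""
    return sum(math.ceil(s / cluster) * cluster for s in sizes if s > 0)

# Hypothetical mix of small files, sizes in bytes.
sizes = [500, 1200, 3000, 10000, 700]
logical = sum(sizes)  # the SIZE statistic: 15400 bytes

for cluster in (512, 4096, 65536):
    on_disk = size_on_disk(sizes, cluster)
    print(cluster, on_disk, on_disk - logical)
# 512    16384     984  -- small clusters waste little
# 4096   28672   13272
# 65536 327680  312280  -- a too-large cluster size wastes most of the disk
```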
You can change the cluster size only by reformatting the volume. To do this, back up the volume, and then format the volume by using the format command and the /a switch to specify the appropriate allocation. For example, format D: /a:2048 formats the volume with a 2-KB cluster size.
Note
Alternately, you can enable NTFS compression to regain space that you lost because of an incorrect cluster size. However, this may result in decreased performance.
File attributes or NTFS permissions
Both Windows Explorer and the directory list command dir /a /s display the total file and folder statistics only for those files and folders that you have permission to access. By default, hidden files and protected operating system files are excluded. This behavior may cause Windows Explorer or the dir command to display inaccurate file and folder totals and size statistics.
To include these types of files in the overall statistics, change Folder Options. To do so, follow these steps:
- Click Start, click My Computer, and then double-click the drive letter (for example: D) of the volume. This opens the volume and displays the folders and files that the root contains.
- On the Tools menu, click Folder Options, and then click the View tab.
- Select the Show Hidden Files and Folders check box, and then click to clear the Hide protected operating system files check box.
- Click Yes when you receive the warning message, and then click the Apply button. This change permits Windows Explorer and the dir /a /s command to total all the files and folders that the volume contains that the user has permission to access.
To determine the folders and files that you cannot access, follow these steps:
At the command prompt, create a text file from the output of the dir /a /s command. For example, type the following command:
dir d: /a /s > c:\d-dir.txt
Then start the Backup or Restore Wizard. To do so:
- Click Start, click Run, type ntbackup, and then click OK.
- Click Advanced Mode.
Click Options on the Tools menu, click the Backup Log tab, click Detailed, and then click OK.
In the Backup Utility, click the Backup tab, and then select the check box for the whole volume that is affected (for example: D:), and then click Start Backup.
After the backup is complete, open the backup report and compare folder for folder the NTBackup log output with the d-dir.txt output that you saved in step 1.
Because backup can access all the files, its report may contain folders and files that Windows Explorer and the dir command do not display. You may find it easier to use the NTBackup interface to locate the volume without backing up the volume when you want to search for large files or folders that you cannot access by using Windows Explorer.
After you locate the files that you do not have access to, you can add or change permissions by using the Security tab while you view the properties of the file or folder in Windows Explorer. By default, you cannot access the System Volume Information folder. You must add the correct permissions to include the folder in the dir /a /s command.
You may notice folders or files that do not have a Security tab. Or, you may not be able to re-assign permissions to the affected folders and files. You may receive the following error message when you try to access them:
D:\folder_name is not accessible
Access is denied
If you have any such folders, contact Microsoft Product Support Services for additional help.
Invalid file names
Folders or files that contain invalid or reserved file names may also be excluded from file and folder statistics. Folders or files that contain leading or trailing spaces are valid in NTFS, but they are not valid from a Win32 subsystem point of view. Therefore, neither Windows Explorer nor a command prompt can reliably work with them.
You may not be able to rename or delete these files or folders. When you try to do so, you may receive one of the following error messages:
Error renaming file or folder
Cannot rename file: Cannot read from the source file or disk.
Or
Error deleting file or folder
Cannot delete file: Cannot read from the source file or disk.
If you have folders or files that you cannot delete or rename, contact Microsoft Product Support Services.
NTFS Master File Table (MFT) expansion
When an NTFS volume is created and formatted, NTFS metafiles are created. One of these metafiles is named the Master File Table (MFT). It is small when it is created (approximately 16 KB), but it grows as files and folders are created on the volume. When a file is created, it is entered in the MFT as a File Record Segment (FRS). The FRS is always 1,024 bytes (1 KB). As files are added to the volume, the MFT grows. However, when files are deleted, the associated FRSs are marked as free for reuse, but the total FRSs and the associated MFT allocation remain. That is why you do not regain the space used by the MFT after you delete a large number of files.
To see exactly how large the MFT is, you can use the built-in defragmenter to analyze the volume. The resulting report provides detailed information about the size and number of fragments in the MFT.
For example:
Master File Table (MFT) fragmentation
Total MFT size = 26,203 KB
MFT record count = 21,444
Percent MFT in use = 81 %
Total MFT fragments = 4
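Because each FRS is a fixed 1 KB, the defragmenter's figures above can be cross-checked with a little arithmetic (this sketch assumes the report truncates, rather than rounds, the percentage):

```python
# Each MFT File Record Segment (FRS) is 1,024 bytes (1 KB), so the
# record count maps directly to the portion of the MFT in use.
# Figures come from the sample defragmentation report above.
total_mft_kb = 26203
record_count = 21444

in_use_kb = record_count * 1                       # 1 KB per FRS record
percent_in_use = in_use_kb * 100 // total_mft_kb   # truncated integer percent

print(in_use_kb)       # 21444
print(percent_in_use)  # 81 -- matches "Percent MFT in use" in the report
```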
However, for more complete information about how much space (overhead) the whole NTFS is using, run the chkdsk.exe command, and then view the output for the following line:
In use by system.
Currently, only third-party defragmenters consolidate unused MFT FRS records and reclaim unused MFT allocated space.
Alternate data streams
NTFS permits files and folders to contain alternate data streams. With this feature, you can associate multiple data allocations with a single file or folder. The use of alternate data streams on files and folders has the following limitations:
- Windows Explorer and the dir command do not report the data in alternate data streams as part of the file size or volume statistics. Instead, they show only the total bytes for the primary data stream.
- The output from chkdsk accurately reports the space that a user's data files use, including alternate data streams.
- Disk quotas accurately track and report all data stream allocations that are part of a user's data files.
- NTBackup records the number of bytes backed up in the backup log report. However, it does not show which files contain alternate data streams, and it does not show accurate file sizes for files that include data in alternate streams.
NTFS file system corruption
In rare circumstances, the NTFS metafiles $MFT or $BITMAP may become corrupted and result in lost disk space. You can identify and fix this issue by running the chkdsk /f command against the volume. Toward the end of chkdsk, you receive the following message if the $BITMAP must be adjusted:
Correcting errors in the master file table's (MFT) BITMAP attribute. CHKDSK discovered free space marked as allocated in the volume bitmap. Windows has made corrections to the file system.
Other NTFS features that may cause file allocation confusion
NTFS also supports hard links and reparse points that permit you to create volume mount points and directory junctions. These additional NTFS features may cause confusion when you try to determine how much space a physical volume is consuming.
A hard link is a directory entry for a file regardless of where the file data is located on that volume. Every file has at least one hard link. On NTFS volumes, each file can have multiple hard links, and therefore a single file can appear in many folders (or even in the same folder with different names). Because all the links refer to the same file, programs can open any of the links and modify the file. A file is deleted from the file system only after all the links to it are deleted. After you create a hard link, programs can use it like any other file name.
Note
Windows Explorer and a command prompt show all linked files as being the same size, even though they all share the same data and do not actually use that amount of disk space.
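A minimal sketch of this hard-link behavior, using Python's os.link (the same semantics apply on NTFS and on POSIX file systems that support hard links):

```python
import os
import tempfile

# Create a file and a second hard link to it, then show that both
# directory entries refer to the same underlying file data.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "report.txt")
    link = os.path.join(d, "report-link.txt")

    with open(original, "w") as f:
        f.write("hello")

    os.link(original, link)  # second hard link to the same file

    # Both names report the same size, and the link count is now 2,
    # but only one copy of the data exists on disk.
    print(os.stat(original).st_nlink)                          # 2
    print(os.stat(link).st_size == os.stat(original).st_size)  # True

    # Deleting one link leaves the data reachable through the other;
    # the file itself is freed only when the last link is removed.
    os.remove(original)
    print(open(link).read())  # hello
```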
Volume mount points and directory junctions permit an empty folder on an NTFS volume to point to the root or subfolder on another volume. Windows Explorer and a dir /s command follow the reparse point, count any files and folders on the destination volume, and then include them in the host volume's statistics. This may mislead you to believe that more space is being used on the host volume than what is actually being used.
In summary, you can use chkdsk output, NTBackup GUI or backup logs, and the viewing of disk quotas to determine how disk space is being used on a volume. However, Windows Explorer and the dir command have some limitations and drawbacks when used for this purpose.