Gaurav Mantri's Personal Blog.

What’s New In Azure Storage

It’s been a while since I wrote a blog post about Azure Storage :).

Earlier this month, the Azure Storage Team released a new version of the Storage Service and included a lot of awesome goodness! In this blog post, I will try to summarize those changes.

So let’s start!

New Storage Service REST API Version / Client Library

All the new changes are rolled into a new version of the REST API. The new version is “2015-02-21”. There are some breaking changes in this new version, so if you want to use the new features, please make sure you use the latest version of the REST API.

This version includes a new blob type called “Append Blob” and tons of new features in the Azure File Service.

Along with the new REST API, the Azure Storage team also released a new version of the Storage Client Library – version 5.0.0. This version implements all the features available in the latest version of the REST API. You can get this library into your projects from NuGet: https://www.nuget.org/packages/WindowsAzure.Storage/.

Append Blob

Append Blob is the newest kid on the block :). Previously there were two kinds of blobs available in Azure Storage – Block Blob and Page Blob. Now there are three.

As the name suggests, content is always appended to an Append Blob. It is ideally suited for storing logging or telemetry data. Even though you could implement this kind of functionality with Block Blobs as well, Append Blobs make it super easy for you to collect the logging data.

Let’s consider a scenario where you want to collect logging data from your web application and store it in blob storage. Furthermore, assume that you want just one file per day. The way you would do this with Append Blob is you first create an empty append blob and as the data comes in, you would simply write that to the blob. Append Blob will make sure that existing data is not overwritten and the new content you are sending in gets written to the end of the blob.

To manage Append Blobs, a new class has been added to the .Net Storage Client library – CloudAppendBlob [Sorry, but MSDN documentation is not updated just yet]. The way you work with CloudAppendBlob is very similar to the way you work with CloudBlockBlob or CloudPageBlob.

        static void CreateEmptyAppendBlob()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var blobClient = account.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference("logs-container");
            container.CreateIfNotExists();
            //One log blob per day, named after the current (UTC) date.
            var logBlob = container.GetAppendBlobReference(DateTime.UtcNow.Date.ToString("yyyy-MM-dd") + ".log");
            logBlob.CreateOrReplace();
        }
        static void WriteToAppendBlob()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var blobClient = account.CreateCloudBlobClient();
            var container = blobClient.GetContainerReference("logs-container");
            container.CreateIfNotExists();
            var logBlob = container.GetAppendBlobReference(DateTime.UtcNow.Date.ToString("yyyy-MM-dd") + ".log");
            //CreateOrReplace() would wipe out any existing data, so only create the blob if it doesn't exist yet.
            if (!logBlob.Exists())
            {
                logBlob.CreateOrReplace();
            }
            logBlob.AppendText(string.Format("[{0}] - some log entry", DateTime.UtcNow));
        }

Append Blob supports all operations supported by other blob types. You can copy append blobs, take snapshots, view/update metadata, view/update properties, download etc.

Some Important Notes:

  • Append Blobs are only supported in the “2015-02-21” version of the REST API. Thus if you want to use Append Blobs, you must use the latest version of the REST API.
  • If a blob container holds a mix of block, page and append blobs, you must also use the latest version of the REST API to list its contents. Blob enumeration will fail at the REST API level itself if you use an older version of the REST API.

Shared Access Signature (SAS) Change

If you’re using the REST API to create a SAS, there’s one breaking change in the way the “canonicalized resource” is created when constructing the string to sign. In the latest version, you must prepend the service name (blob, table, queue or file) to the canonicalized resource. For example, if the URL for which you want to create a SAS is “https://myaccount.blob.core.windows.net/music”:

In previous versions, canonicalized resource would be:

/myaccount/music

But in the new version, it would be:

/blob/myaccount/music
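To make the change concrete, here’s a small sketch of how the new-style canonicalized resource could be derived from a resource URL. The helper name is mine and this is just an illustration, not the library’s actual signing code:

```csharp
using System;

class CanonicalizedResourceSample
{
    //Illustrative helper (not part of the client library): derive the 2015-02-21
    //style canonicalized resource from a storage resource URL. The service name
    //(blob, queue, table or file) is the second label of the host name.
    public static string GetCanonicalizedResource(Uri resourceUri)
    {
        var hostParts = resourceUri.Host.Split('.');
        string accountName = hostParts[0];  //e.g. "myaccount"
        string serviceName = hostParts[1];  //e.g. "blob"
        return "/" + serviceName + "/" + accountName + resourceUri.AbsolutePath;
    }

    static void Main()
    {
        var uri = new Uri("https://myaccount.blob.core.windows.net/music");
        Console.WriteLine(GetCanonicalizedResource(uri)); //prints "/blob/myaccount/music"
    }
}
```

Note that this only shows the canonicalized resource portion; the full string to sign contains several other fields (permissions, start/expiry times, etc.) as well.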

You can learn more about Shared Access Signature here: https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx.

File Service Changes

This is where the fun begins :). There are a number of changes done in the File Service. Let’s talk about them!

However, please note that the File Service is still in preview and thus is not enabled by default on your storage account/subscription. You would need to enable the File Service for your subscription by visiting the account management portal.

CORS

As you may already know, the other Storage Services (Blobs, Queues and Tables) have been supporting CORS for a long time now (this, along with SAS, has been the foundation of Cloud Portam). Now the File Service also supports CORS! A BIG Yay!!!

CORS for File Service works the same way as that for other services:

  • CORS rules are applied at the service level.
  • There can be a maximum of 5 CORS rules for File Service.
  • Each CORS rule has a list of allowed origins, a set of allowed HTTP verbs, lists of allowed (request) & exposed (response) headers, and a max age in seconds.

Let’s take an example of how you would set a CORS rule for the File Service. In this example, I will use the CORS rule required for Cloud Portam. For Cloud Portam, we need the following CORS rule set:

Allowed Origins: https://app.cloudportam.com
Allowed Verbs: Get, Head, Post, Put, Delete, Trace, Options, Connect, and Merge
Allowed Headers: *
Exposed Headers: *
Max Age: 3600 seconds

        static void SetFileServiceCorsRule()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            //Fetch the current service properties first so that existing CORS rules (if any) are preserved.
            var serviceProperties = fileClient.GetServiceProperties();
            CorsRule corsRule = new CorsRule()
            {
                AllowedOrigins = new List<string>() { "https://app.cloudportam.com" },
                AllowedMethods = CorsHttpMethods.Connect | CorsHttpMethods.Delete | CorsHttpMethods.Get | 
                                    CorsHttpMethods.Head | CorsHttpMethods.Merge | CorsHttpMethods.Options | 
                                    CorsHttpMethods.Post | CorsHttpMethods.Put | CorsHttpMethods.Trace,
                AllowedHeaders = new List<string>() { "*" },
                ExposedHeaders = new List<string>() { "*" },
                MaxAgeInSeconds = 3600
            };
            serviceProperties.Cors.CorsRules.Add(corsRule);
            fileClient.SetServiceProperties(serviceProperties);
        }

Here’s an example of how you would read the CORS rules currently set for File Service.

        static void GetFileServiceCorsRule()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var serviceProperties = fileClient.GetServiceProperties();
            var corsRules = serviceProperties.Cors.CorsRules;
            foreach (var corsRule in corsRules)
            {
                Console.WriteLine("Allowed Origins: " + string.Join(", ", corsRule.AllowedOrigins));
                //AllowedMethods is a flags enum, so check each defined verb against it.
                List<string> allowedMethods = new List<string>();
                foreach (CorsHttpMethods method in Enum.GetValues(typeof(CorsHttpMethods)))
                {
                    if (method != CorsHttpMethods.None && corsRule.AllowedMethods.HasFlag(method))
                    {
                        allowedMethods.Add(method.ToString());
                    }
                }
                Console.WriteLine("Allowed Methods: " + string.Join(", ", allowedMethods));
                Console.WriteLine("Allowed Headers: " + string.Join(", ", corsRule.AllowedHeaders));
                Console.WriteLine("Exposed Headers: " + string.Join(", ", corsRule.ExposedHeaders));
                Console.WriteLine("Max Age (in Seconds): " + corsRule.MaxAgeInSeconds);
            }
        }

Share Quota

Now you can define a quota for a share. The quota restricts the maximum size of that share. The quota is specified in GB, and its value must be between 1 GB and 5 TB (5120 GB).

You can set the quota of a share when you create it. You can also update the quota later on by changing the share’s properties.

        static void CreateShareWithQuotaAndUpdateIt()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.Properties.Quota = 128;//Set share quota to 128 GB.
            share.CreateIfNotExists();
            //Fetch the share's attributes
            share.FetchAttributes();
            Console.WriteLine("Share's Quota (in GB) = " + share.Properties.Quota);
            //Now let's update the share quota
            share.Properties.Quota = 1024;//Set share quota to 1024 GB (1 TB)
            share.SetProperties();
            //Fetch the share's attributes
            share.FetchAttributes();
            Console.WriteLine("Share's Quota (in GB) = " + share.Properties.Quota);
            share.DeleteIfExists();
        }

Please note that if you don’t set the quota while creating a share, its quota will be set to the maximum value (5 TB).

However, if you call “SetProperties()” on a share but don’t provide a value for the quota, its value is not changed.

Share Usage

Another neat feature introduced in storage is the ability to view share usage, i.e. how much of the share’s quota has been used. Please note that this is an approximate value only.

        static void GetShareUsage()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.Properties.Quota = 128;//Set share quota to 128 GB.
            share.CreateIfNotExists();
            var shareUsage = share.GetStats().Usage;
            Console.WriteLine("Share Usage (in GB): " + shareUsage);
            share.DeleteIfExists();
        }

Share Access Policies

This is another important feature that has been missing from the File Service till now. Before the current version, there was no anonymous access to File Service shares and files; in order to perform any operation on the File Service, you needed the account key.

With the introduction of Share Access Policies and Shared Access Signature support, it is now possible to perform certain operations on shares and files without using the account key.

Share Access Policies work in the same way as Blob Container Access Policies:

  • There can be a maximum of 5 access policies per share.
  • Each access policy must have a unique identifier and optionally can have a start/end date and permissions (Read, Write, List, and Delete).
  • When using an access policy to create a Shared Access Signature, only the missing parameters need to be specified. For example, if an access policy has a start date defined, you can’t specify a start date in your Shared Access Signature.

Let’s see how you can create a shared access policy on a share. In this example, we’re creating an access policy with all permissions (Read, Write, List and Delete) and an expiry time 24 hours from the current date/time.

        static void SetShareAccessPolicy()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            //Fetch the existing permissions first so that any access policies already on the share are preserved.
            var permissions = share.GetPermissions();
            var sharedAccessFilePolicy = new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy()
            {
                Permissions = Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Read | Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Write | 
                                Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.List | Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Delete,
                SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddDays(1))
            };
            var accessPolicyIdentifier = "policy-1";
            permissions.SharedAccessPolicies.Add(accessPolicyIdentifier, sharedAccessFilePolicy);
            share.SetPermissions(permissions);
        }
        static void GetShareAccessPolicy()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var permissions = share.GetPermissions();
            var accessPolicies = permissions.SharedAccessPolicies;
            foreach (var item in accessPolicies)
            {
                Console.WriteLine("Identifier: " + item.Key);
                var accessPolicy = item.Value;
                Console.WriteLine("Start Time: " + accessPolicy.SharedAccessStartTime);
                Console.WriteLine("Expiry Time: " + accessPolicy.SharedAccessExpiryTime);
                Console.WriteLine("Read Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Read));
                Console.WriteLine("Write Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Write));
                Console.WriteLine("List Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.List));
                Console.WriteLine("Delete Permission: " + accessPolicy.Permissions.HasFlag(Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Delete));
            }
            share.DeleteIfExists();
        }

Shared Access Signature

With Access Policies comes Shared Access Signature (SAS) :). This is another important improvement done in File Service. Now you can create SAS URL for File Service Shares and Files.

SAS for File Service works much like SAS for Blob Containers and Blobs:

  • You can create a SAS both without an access policy (ad-hoc SAS) and with an access policy.
  • For a SAS, you define a start date/time (optional), an end date/time and at least one of the Read, Write, List or Delete permissions. If you’re using an access policy to define a SAS, you only specify the parameters which are not present in that access policy.
  • When creating a SAS on a share, the following permissions are applicable: Read, Write, List and Delete. When creating a SAS on a file in a share, the List permission is not applicable.

Let’s see how you can create a SAS on a share. In this example, we will create an ad-hoc SAS with just the “List” permission that will expire 24 hours from the current date/time.

        static void CreateSasOnShare()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var sasToken = share.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy()
                {
                    Permissions = Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.List,
                    SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddDays(1))
                });
            var sasUrl = share.Uri.AbsoluteUri + sasToken;
            Console.WriteLine(sasUrl);
            share.DeleteIfExists();
        }

Now let’s see how you can create a SAS on a file in a share. In this example, we will create an ad-hoc SAS with just the “Read” permission that will expire 24 hours from the current date/time.

        static void CreateSasOnFile()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var file = share.GetRootDirectoryReference().GetFileReference("myfile.txt");
            file.UploadText("This is sample file!");
            var sasToken = file.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy()
            {
                Permissions = Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.Read,
                SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddDays(1))
            });
            var sasUrl = file.Uri.AbsoluteUri + sasToken;
            Console.WriteLine(sasUrl);
            //Now let's read this file by making an HTTP Web Request using SAS URL.
            var request = (HttpWebRequest) HttpWebRequest.Create(sasUrl);
            request.Method = "GET";
            using (var response = (HttpWebResponse) request.GetResponse())
            {
                using (var streamReader = new StreamReader(response.GetResponseStream()))
                {
                    var fileContents = streamReader.ReadToEnd();
                    Console.WriteLine(fileContents);
                }
            }
            share.DeleteIfExists();
        }

Directory Metadata

In previous versions, the Storage Service allowed you to define metadata on a share and a file but not on a directory. In this release, they have enabled this functionality. Now you can define custom metadata on a directory in the form of key/value pairs. You can set the metadata when creating a directory and update it later on.

Rules for metadata on a directory are the same as those for a share and a file:

  • Metadata keys must be valid C# identifiers.
  • The total size of the metadata cannot exceed 8 KB.

Let’s see how you can set metadata on a directory.

        static void DirectoryMetadata()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var share = fileClient.GetShareReference("share-name");
            share.CreateIfNotExists();
            var directory = share.GetRootDirectoryReference().GetDirectoryReference("folder");
            //Metadata is an IDictionary<string, string>, so key/value pairs can be added directly.
            directory.Metadata.Add("Key1", "Value1");
            directory.Metadata.Add("Key2", "Value2");
            directory.CreateIfNotExists();
            //Fetch directory attributes
            directory.FetchAttributes();
            var metadata = directory.Metadata;
            foreach (var item in metadata)
            {
                Console.WriteLine("Key = " + item.Key + "; Value = " + item.Value);
            }
            Console.WriteLine("----------------------------------------");
            //Now let's update the metadata
            directory.Metadata.Add("Key3", "Value3");
            directory.Metadata.Add("Key4", "Value4");
            directory.SetMetadata();
            //Fetch directory attributes
            directory.FetchAttributes();
            metadata = directory.Metadata;
            foreach (var item in metadata)
            {
                Console.WriteLine("Key = " + item.Key + "; Value = " + item.Value);
            }
            share.DeleteIfExists();
        }

Copy Files

This is yet another important feature introduced in the latest API. Essentially, this functionality provides server-side asynchronous copying of files across different shares, within or across storage accounts. Not only that, you can now copy files from your File Service shares to blob containers and vice versa.

Unfortunately I haven’t played with it much to include some examples but I will update this post with more details as I learn more about this functionality.
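That said, going by the surface of the 5.0.0 client library, a server-side copy between two shares should look roughly like this. I haven’t run this yet, and the share and file names are made up for illustration:

```csharp
        static void CopyFileAcrossShares()
        {
            var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            var fileClient = account.CreateCloudFileClient();
            var sourceFile = fileClient.GetShareReference("source-share")
                                .GetRootDirectoryReference().GetFileReference("myfile.txt");
            var destinationFile = fileClient.GetShareReference("destination-share")
                                .GetRootDirectoryReference().GetFileReference("myfile-copy.txt");
            //StartCopy kicks off an asynchronous server-side copy and returns a copy id.
            var copyId = destinationFile.StartCopy(sourceFile);
            //The copy progress can be checked via the destination file's CopyState property.
            destinationFile.FetchAttributes();
            Console.WriteLine("Copy Status: " + destinationFile.CopyState.Status);
        }
```

Since the copy is asynchronous, in real code you would poll CopyState until the status indicates the copy has completed.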

Wish List

Even though the new features introduced are very impressive, there are still some things I think are missing from the API. Some of the items from my wish list are:

  • Ability to recursively list files – Currently the File Service just lists the files and directories directly inside a share or a directory. I wish the storage team would include functionality wherein I could list all files inside a share irrespective of the nested directory hierarchy.
  • Ability to delete a non-empty folder – Currently, in order to delete a folder, it must be completely empty. I wish the storage team would include functionality wherein I could delete a non-empty folder.
  • Ability to copy a folder – Currently the copy functionality works only for a file. I wish the storage team would include copy folder functionality.

These are some items from my wish list. If you have a wish list of your own, please share it by providing comments below.
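Of course, some of these can be approximated on the client side today. For example, here’s a quick (untested) sketch of recursive file listing using the 5.0.0 client library, walking the directory tree ourselves:

```csharp
        //Client-side workaround for the missing recursive listing: walk the tree.
        static List<Microsoft.WindowsAzure.Storage.File.CloudFile> ListFilesRecursively(
            Microsoft.WindowsAzure.Storage.File.CloudFileDirectory directory)
        {
            var files = new List<Microsoft.WindowsAzure.Storage.File.CloudFile>();
            foreach (var item in directory.ListFilesAndDirectories())
            {
                var file = item as Microsoft.WindowsAzure.Storage.File.CloudFile;
                if (file != null)
                {
                    files.Add(file);
                    continue;
                }
                var subDirectory = item as Microsoft.WindowsAzure.Storage.File.CloudFileDirectory;
                if (subDirectory != null)
                {
                    files.AddRange(ListFilesRecursively(subDirectory));
                }
            }
            return files;
        }
```

You would call this with share.GetRootDirectoryReference() to get every file in a share. Obviously a single server-side operation would be far more efficient, hence the wish.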

Cloud Portam

Most of the things I mentioned above are either already there in Cloud Portam or will be there soon. So if you are looking for a tool to use these features, please give Cloud Portam a try. The website address is http://www.cloudportam.com.

Summary

That’s it for this post. I hope you have found the post useful. If you find any issues with the post, please let me know and I will fix them ASAP.

Happy Coding!!!



Comments

  1. Gary Brouwers says:

    This version seems to be breaking the Storage Emulator for me. I’m getting the Bad Request error which seemed to plague people when the Storage Library was updated in 2013 (around 3.0 if I recall). I’ve tried updating the SDK to 2.6 (I have VS2012), but still the error persists. Without the emulator our development grinds to a halt. Is this a known issue or am I just special?

    • Hi Gary, what version of the storage client library are you using?

      • Gary Brouwers says:

        I’m using 5.0.0. I’ve also tried the pre-release, 5.0.1 but had the same errors.

        • Gary…If I am not mistaken, you would need to downgrade the client library to 4.x as well. You see, each version of the storage client library corresponds to a storage service REST API version, and the same goes for the storage emulator. If you use storage emulator version 4.0, then you can’t use storage client library 5 with it; you would need to use version 4.x. If you want to use storage client library version 5, then you have 2 options: 1) make use of the latest version of the storage emulator or 2) do your development against a cloud storage account.

  2. Hello,
    I tried your examples and CreateSasOnFile works fine, however CreateSasOnShare does not seem to generate the proper URL. Indeed, when I paste sasUrl in IE, I get the following message:


    AuthenticationFailed
    Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:7e910159-001a-0070-8038-9489da000000 Time:2016-04-11T21:25:24.9735466Z
    Signature did not match. String to sign used was: l 2016-04-11T21:28:46Z /file/mystorage/$root 2015-07-08

    Any idea as to what might cause the problem?

    Thank you.

    Marc

    • Hi Marc,

      The SAS URL is generated just fine. If you want to list the files and directories in a share using this SAS URL, you just need to append “&restype=directory&comp=list” at the end of your SAS URL (assuming your SAS token has list permissions).

      Hope this helps.

      • Hello,

        Thank very much for your reply, and sorry my late reply.

        You are right, your suggestion worked. This is the output I obtained from IE, for an Azure File storage share containing one file and one directory: an XML listing showing

        MyTextFile.txt (size: 7564)
        MyDirectory
        Now, what I actually need to do is, from C#, to access an Azure File storage share and perform operations such as listing directories and files, removing files, etc., in much the same way as I can do when using the storage account name and key, but using the SAS instead. This will have to be done from a client machine which should not have access to the storage account name and key.

        All the examples I have found are for blobs, not for files.

        I would greatly appreciate if you could provide me with some links or suggestions.

        Thanks and Regards,

        Marc

  3. Hello again,

    In order to perform operations on an Azure File storage share by making usage of a SAS URI instead of the storage account key, I tried this but get a server error:

    // Create a file share object based on a SAS URI previously obtained.
    CloudFileShare MyFileShare = new CloudFileShare(new Uri(MyFileShareSASUri));
    // Ensure that the share exists.
    if (MyFileShare.Exists()) { // Runtime error on this line: "The remote server returned an error: (403) Forbidden"
        // Perform operations on file share
    }

    I have confirmed in IE that the MyFileShareSASUri is sound and allows me access to the share.

    I would greatly appreciate your suggestions on this matter.

    Thanks again.

    Marc

    • Hello!

      I believe invoking the MyFileShare.Exists() method would require an Account SAS instead of a Service SAS. When in Visual Studio I skip this statement and proceed with listing the contents of the Azure File storage share, it succeeds, without the need to provide the storage account key other than “indirectly” via the Service SAS. So I am OK for now!

      I will experiment somewhat with the Account SAS discussed in your October 2015 post.

      Thank you for “sharing” all this information on “shared” access signatures! 🙂

      Marc