Gaurav Mantri's Personal Blog.

Cosmos DB and Node SDK – Part II: Working with Containers

In the previous post, we saw how we can use Node SDK for Cosmos DB to work with databases. You can read that post here: https://gauravmantri.com/2019/06/04/cosmos-db-and-node-sdk-working-with-databases/.

In this post, we will see how we can manage containers in a Cosmos DB database using Node SDK.

What is a Container?

Simply put, a container is something that contains data along with things like stored procedures, triggers, and user-defined functions to manipulate that data programmatically on the server side.

Depending upon the API your Cosmos DB account is targeting, you will find some alternate names for a container. For example, for SQL API (formerly known as DocumentDB) a container is also called a collection. For Graph API, a container is also called a graph. Similarly for Table API, a container is also called a table.

You can learn more about containers here: https://docs.microsoft.com/en-us/azure/cosmos-db/databases-containers-items#azure-cosmos-containers.

Before We Begin

There are certain things we need to do before we can get started:

  • Please ensure that you’ve followed all steps identified in “Before We Begin” section in the previous post.
  • Please create a database in your account. Again you can refer to the code for creating a database in the previous post. For this post, I created a database called “mydatabase”.

Once the database is created, create a new file called “container-samples.js” and add the following lines of code to it:

const {Promise} = require('bluebird');
const {CosmosClient, IndexingMode, IndexKind, DataType} = require('@azure/cosmos');

const accountEndpoint = 'https://account-name.documents.azure.com:443/';
const accountKey = 'yM0g3KnPANPpBgKLi34OMz1UZ7Png2pjQrs209IrrQkyhtqZKmALludel1nizEOqeJMm1gavLb0dS0gAoMw3Pw==';
const databaseId = 'mydatabase';

/**
 * Method to get the client connection object.
 */
const getClient = () => {
   return new CosmosClient({
     endpoint: accountEndpoint,
     auth: {
       masterKey: accountKey
     }
   });
 };

Please make sure to use your own Cosmos DB account's credentials for these values.

We’re now all set to move forward!

Oh, and one more thing. Because we will be including code for both async/await and promises, we will just prefix the method name with the approach we’re using like we did in the previous post. For example, “listContainersAsync” and “listContainersPromise” for async/await and promise respectively.
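If you're new to these two styles, here's a tiny self-contained sketch (no Cosmos DB calls, purely illustrative) showing that they are interchangeable: an async function returns a promise, so callers can consume either variant the same way.

```javascript
// Two equivalent ways to produce the same asynchronous result.
const getAnswerAsync = async () => {
  return 42;
};

const getAnswerPromise = () => {
  return new Promise((resolve) => {
    resolve(42);
  });
};

// Both can be awaited (or chained with .then()):
(async () => {
  const a = await getAnswerAsync();
  const b = await getAnswerPromise();
  console.log(a === b); // true
})();
```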

List Containers

Let’s say we want to list all containers in a database. Here’s how we would go about doing that.

Using Async/Await

Here’s the code to list containers in a database using async/await:

const listContainersAsync = async () => {
  const client = getClient();
  const database = client.database(databaseId);
  const containersListingResult = await database.containers.readAll().toArray();
  const containers = [];
  containersListingResult.result.forEach((item) => {
    containers.push(item);
  });
  return containers;
};

The first thing we're doing here is getting an instance of CosmosClient. After that we're getting an instance of the Database class using the database() method of the client. Finally we're calling the readAll() method on the containers property of the database and converting the result into an array using toArray().

Using Promise

And here’s the code to use if you were to use promise:

const listContainersPromise = () => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    database.containers.readAll().toArray()
    .then((containersListingResult) => {
      const containers = [];
      containersListingResult.result.forEach((item) => {
        containers.push(item);
      });
      resolve(containers);
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Create Container

Next, let’s see how we can create a container. Let’s say we want to create a container by the name “myContainer”.

Using Async/Await

Here’s the code to create a container using async/await:

const createContainerAsync = async (containerId) => {
  const client = getClient();
  const database = client.database(databaseId);
  const containerDefinition = {
    id: containerId
  };
  const result = await database.containers.create(containerDefinition);
  return result;
};

What we’re doing here is getting an instance of CosmosClient. After that we’re getting an instance of Database class using database() method of client. Finally we’re calling create() method on the containers property of the database and passing an object of type ContainerDefinition to that method.

Once this code runs successfully, we will have a container in our database. The throughput of that container will be the minimum allowed by Cosmos DB (400 RU/s currently), with default indexing policies and no document time-to-live (TTL). The maximum size of the newly created container will be 10 GB because we didn’t specify a PartitionKey for our container. We will talk about creating a partitioned container a little later.

The output of this method is an object that has the following key members:

  • body: This contains the system properties of the container like _rid, _self, _etag, _ts etc.
  • headers: This contains the response headers.
  • container: This actually is an instance of Container class. You will need to use this object if you want to perform any operation on the container, like reading its properties, deleting it, etc.
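To make the shape concrete, here's a small illustrative sketch that destructures a mocked-up result object (the property names mirror the list above; the values are made up):

```javascript
// Hypothetical sketch of consuming the create() result; the mock below
// mirrors the { body, headers, container } shape described above.
const mockCreateResult = {
  body: { id: 'myContainer', _rid: 'abc==', _ts: 1559621152 },
  headers: { 'x-ms-request-charge': '4.95' },
  container: { id: 'myContainer' } // in real code, a Container instance
};

const { body, headers, container } = mockCreateResult;
console.log(body.id);      // 'myContainer'
console.log(container.id); // 'myContainer'
```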

Please note that the method above will fail if a container with the same name already exists in the database.

To fix this, we simply have to use createIfNotExists method. It’s that simple! Here’s the code to do so:

const createContainerIfNotExistsAsync = async (containerId) => {
  const client = getClient();
  const database = client.database(databaseId);
  const containerDefinition = {
    id: containerId
  };
  const result = await database.containers.createIfNotExists(containerDefinition);
  return result;
};

Using Promise

And here’s the code to do so if you were to use promise:

const createContainerPromise = (containerId) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const containerDefinition = {
      id: containerId
    };
    database.containers.create(containerDefinition)
    .then((result) => {
      resolve(result);
    })
    .catch((error) => {
      reject(error);
    });
  });
}

const createContainerIfNotExistsPromise = (containerId) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const containerDefinition = {
      id: containerId
    };
    database.containers.createIfNotExists(containerDefinition)
    .then((result) => {
      resolve(result);
    })
    .catch((error) => {
      reject(error);
    });
  });
}

Create Container with Provisioned Throughput

In the previous example, we created a container but did not specify any throughput. Let’s see how we can create a container with provisioned throughput.

Using Async/Await

Here’s the code to create a container with provisioned throughput using async/await:

const createContainerWithProvisionedThroughputAsync = async (containerId, throughput) => {
  const client = getClient();
  const database = client.database(databaseId);
  const containerDefinition = {
    id: containerId
  };
  const requestOptions = {
    offerThroughput: throughput
  };
  const result = await database.containers.create(containerDefinition, requestOptions);
  return result;
};

If you look at the code above, you will notice that it is very similar to the code we used for creating a container without provisioned throughput. To create a container with provisioned throughput, all we have to do is define a requestOptions object and specify the throughput we want as the value of its offerThroughput property. It’s really that simple!

Please note that at the time of writing this post, the minimum throughput that a container can have is 400 RU/s and the maximum that you can set programmatically is 100,000 RU/s. For throughput beyond this limit, you would need to contact Azure support.

Create “Partitioned” Container

Now let’s see how we can create a partitioned container. However, before we do that let’s take a moment to understand what a partitioned container is.

About Partitioned Container

When you create a partitioned container, data in that container is partitioned (or sharded) based on a user-defined attribute (known as the PartitionKey). In other words, in a partitioned container all documents having the same PartitionKey value are grouped together and placed in something called a “Logical Partition”.

A few things to know about a partitioned container:

  1. No Size Limit: While a container without partition can be of a maximum of 10 GB in size, there’s no such restriction on the size of a partitioned container. It has unlimited size.
  2. 10 GB / logical partition: While there’s no limit on the total size of a partitioned container, there is a hard limit of 10 GB per logical partition. In other words, each logical partition can have a maximum of 10 GB storage available to you for storing data and index.
  3. PartitionKey is immutable: Once a partitioned container is created you can’t change the PartitionKey for that container. You’ll need to migrate the data from one container to another (with a different PartitionKey). Similarly you can’t convert a non-partitioned container to a partitioned container and vice-versa.
  4. Throughput is evenly divided amongst logical partitions: Whatever throughput you have defined at the container level gets equally divided amongst logical partitions.

Because of #2 and #3 above, it becomes really important that you choose the PartitionKey very wisely.
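As an illustration of why the choice matters, here's a small local sketch (not part of the SDK, using made-up sample documents) that counts how documents would spread across logical partitions for a candidate PartitionKey; heavily skewed counts hint at a poor choice:

```javascript
// Made-up sample documents for inspecting partition key distribution.
const sampleDocs = [
  { id: '1', state: 'WY', zipCode: '11111' },
  { id: '2', state: 'WY', zipCode: '22222' },
  { id: '3', state: 'MD', zipCode: '21045' }
];

// Count how many documents would land in each logical partition
// for a given top-level candidate key.
const countByPartitionKey = (docs, key) => {
  const counts = {};
  docs.forEach((doc) => {
    const value = doc[key];
    counts[value] = (counts[value] || 0) + 1;
  });
  return counts;
};

console.log(countByPartitionKey(sampleDocs, 'state'));   // { WY: 2, MD: 1 }
console.log(countByPartitionKey(sampleDocs, 'zipCode')); // one document per zip code
```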

To learn more about partitioning, please visit this link: https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview.

Defining PartitionKey

PartitionKey property for a collection actually refers to a path in a document. For example, consider the following JSON document:

{
  "id": "1234567890",
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "street": "123 Main Road",
    "city": "Any Town",
    "state": "WY",
    "zipCode": "11111",
    "location": {
      "type": "Point",
      "coordinates": [31.9, -4.8]
    }
  },
  "phone": "XXX-XXX-XXXX",
  "email": "john.smith@something.com",
  "ssn": "XXX-XXX-XXXX"
}

Now if you want to partition on “lastName” attribute, the PartitionKey for the collection would be /lastName. Similarly if you wish to partition on “zipCode” attribute, the PartitionKey for the collection would be /address/zipCode.
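To make the path-to-value mapping concrete, here's a small illustrative helper (not part of the SDK) that resolves a PartitionKey path against a document:

```javascript
// Resolve a PartitionKey path like '/address/zipCode' to the value
// it points at inside a document. Illustrative only.
const getPartitionKeyValue = (doc, path) => {
  return path
    .split('/')
    .filter((segment) => segment.length > 0)
    .reduce((current, segment) => (current ? current[segment] : undefined), doc);
};

const doc = {
  id: '1234567890',
  lastName: 'Smith',
  address: { zipCode: '11111' }
};

console.log(getPartitionKeyValue(doc, '/lastName'));        // 'Smith'
console.log(getPartitionKeyValue(doc, '/address/zipCode')); // '11111'
```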

To create a partitioned collection, you simply need to provide the partitionKey property in your container definition. If we take the code for creating a container from above and assume we want the PartitionKey for the collection to be /address/zipCode, this is what the container definition would look like:

  const containerDefinition = {
    id: containerId,
    partitionKey: {
      paths: ['/address/zipCode']
    }
  };

The rest of the code would remain the same. Please note that even though the partitionKey attribute’s paths property is an array, it can only contain one element.

Using Async/Await

Here’s the code to create a partitioned container using async/await:

const createContainerWithPartitionKeyAsync = async (containerId, partitionKeyPath) => {
  const client = getClient();
  const database = client.database(databaseId);
  const partitionKeyDefinition = {
    paths: [partitionKeyPath]
  }
  const containerDefinition = {
    id: containerId,
    partitionKey: partitionKeyDefinition
  };
  const result = await database.containers.create(containerDefinition);
  return result;
}

Using Promise

And here’s the code to do so if you were to use promise:

const createContainerWithPartitionKeyPromise = (containerId, partitionKeyPath) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const partitionKeyDefinition = {
      paths: [partitionKeyPath]
    }
    const containerDefinition = {
      id: containerId,
      partitionKey: partitionKeyDefinition
    };
    database.containers.create(containerDefinition)
    .then((result) => {
      resolve(result);
    })
    .catch((error) => {
      reject(error);
    });
  });
}

Create Container with Custom Indexing Policy

Next, let’s see how we can create a container with custom indexing policy. However before we do that, let’s take a moment and talk about indexing.

Indexing

When you create a container, by default all attributes of all items in that container are indexed. Cosmos DB engine constantly updates the index as the items are added/modified/deleted in that container.

However, there are times when you want to override this default behavior. Consider these scenarios for example.

  • Let’s say you’re performing a bulk insert in a container and don’t want Cosmos DB engine to index the data as it gets inserted. Instead you may want to index the data once the bulk insert operation has finished. In this case, you may want to turn off indexing while bulk insert operation is running. This can be accomplished by changing the indexing mode.
  • Consider a scenario where you know for sure that you will never query on a particular attribute. In that case you may not want that particular attribute to be indexed. This can be accomplished by including this attribute’s path under “excluded paths” to instruct Cosmos DB engine to not index that particular attribute.
  • Reverse of above scenario is where you know for sure that you will only query on certain attributes. In that case you only want to index those attributes. This can be accomplished by including paths of those attributes under “included paths” to instruct Cosmos DB engine to index only those attributes.

Luckily for us, Cosmos DB has made defining an indexing policy very simple. An indexing policy consists of 3 things:

  1. Indexing Mode: It can be one of 3 values – Consistent, Lazy or None. It tells the Cosmos DB engine how the data should be indexed: with “Consistent”, data is indexed synchronously as items are added or changed; with “Lazy”, data is indexed whenever the container has spare capacity; and with “None”, the data is not indexed at all.
  2. Included Paths: These are the paths which must be indexed. When defining an included path, you must also specify the index kind (Hash or Range for String and Number data types, Spatial for GeoJSON data types) and the data type (one of the following – Number, String, Point, LineString, Polygon, and MultiPolygon).
  3. Excluded Paths: These are the paths that must not be indexed.

To learn more about indexing, please visit this link: https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy

Here’s a sample indexing policy definition:

{
  indexingMode: IndexingMode.consistent,
  includedPaths: [{
    path: '/address/location/?',
    indexes: [{
      kind: IndexKind.Spatial,
      dataType: DataType.MultiPolygon
    }]
  }],
  excludedPaths: [{
    path: '/*'
  }]
}

What we are doing here is telling Cosmos DB engine that we want “Consistent” indexing mode and only want to index “/address/location” path and exclude everything else.
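To get a feel for how these policy paths relate to concrete document paths, here's a deliberately simplified matcher (illustrative only; the actual matching rules implemented by the Cosmos DB engine are richer than this):

```javascript
// Simplified illustration of indexing-policy path matching:
// '/?' refers to the scalar value at exactly that path, while
// '/*' matches that path and everything beneath it.
const pathMatches = (policyPath, documentPath) => {
  if (policyPath.endsWith('/*')) {
    return documentPath.startsWith(policyPath.slice(0, -1));
  }
  if (policyPath.endsWith('/?')) {
    return documentPath === policyPath.slice(0, -2);
  }
  return documentPath === policyPath;
};

console.log(pathMatches('/address/location/?', '/address/location')); // true
console.log(pathMatches('/*', '/phone'));                             // true
console.log(pathMatches('/address/location/?', '/phone'));            // false
```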

Now let’s see the code.

Using Async/Await

Here’s the code to create a container with a custom indexing policy using async/await:

const createContainerWithIndexingPolicyAsync = async (containerId, indexingPolicy) => {
  const client = getClient();
  const database = client.database(databaseId);
  const containerDefinition = {
    id: containerId,
    indexingPolicy: indexingPolicy
  };
  const result = await database.containers.create(containerDefinition);
  return result;
};

What we’re doing here is providing a value for indexingPolicy attribute in container definition.

Using Promise

And here’s the code if you were to use promise:

const createContainerWithIndexingPolicyPromise = (containerId, indexingPolicy) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const containerDefinition = {
      id: containerId,
      indexingPolicy: indexingPolicy
    };
    database.containers.create(containerDefinition)
    .then((result) => {
      resolve(result);
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Create Container with Unique Key Policy

Next, let’s see how we can create a container with unique key policy. Again, let’s first understand what unique keys are.

Unique Key Policy

To understand unique key policy, let’s consider this item:

{
  "id": "1234567890",
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "street": "123 Main Road",
    "city": "Columbia",
    "state": "MD",
    "zipCode": "21045",
    "location": {
      "type": "Point",
      "coordinates": [31.9, -4.8]
    }
  },
  "phone": "XXX-XXX-XXXX",
  "email": "john.smith@something.com",
  "ssn": "XXX-XXX-XXXX"
}

Now let’s say you want to ensure that there can only be a single record for each “ssn”. A unique key policy lets you achieve just that: the Cosmos DB engine ensures that if an item exists with a given “ssn”, a new item can’t be created with the same “ssn”.

Furthermore, let’s say we want to ensure that no duplicate combination of “street”, “city”, “state” and “zipCode” is inserted. Again, you can define a unique key policy to accomplish that.

Here’s what a unique key policy that enforces both of the constraints above would look like:

{
  uniqueKeys: [
    { paths: ['/ssn'] },
    { paths: ['/address/street', '/address/city', '/address/state', '/address/zipCode'] }
  ]
}
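To see what this policy buys us, here's a small local simulation (illustrative only, not SDK behavior) of uniqueness enforcement over the listed paths:

```javascript
// Build a composite key from the values at the policy's paths.
const makeUniqueKey = (doc, paths) => {
  return paths
    .map((path) => path.split('/').filter(Boolean).reduce((o, seg) => o[seg], doc))
    .join('|');
};

const seen = new Set();

// Simulate an insert: reject when the composite key already exists.
const tryInsert = (doc, paths) => {
  const key = makeUniqueKey(doc, paths);
  if (seen.has(key)) {
    return false; // Cosmos DB would reject this with a conflict (HTTP 409)
  }
  seen.add(key);
  return true;
};

const paths = ['/ssn'];
console.log(tryInsert({ ssn: '111-22-3333' }, paths)); // true
console.log(tryInsert({ ssn: '111-22-3333' }, paths)); // false – duplicate ssn
```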

Please note that a unique key policy can only be defined at the time of creating a container and can’t be changed later on.

To learn more about unique keys, please visit this link: https://docs.microsoft.com/en-us/azure/cosmos-db/unique-keys

Now let’s take a look at the code.

Using Async/Await

Here’s the code to create a container with unique key policy using async/await:

const createContainerWithUniqueKeyPolicyAsync = async (containerId, uniqueKeyPolicy) => {
  const client = getClient();
  const database = client.database(databaseId);
  const containerDefinition = {
    id: containerId,
    uniqueKeyPolicy: uniqueKeyPolicy
  };
  const result = await database.containers.create(containerDefinition);
  return result;
};

What we’re doing here is providing a value for uniqueKeyPolicy attribute in container definition.

Using Promise

And here’s the code if you were to use promise:

const createContainerWithUniqueKeyPolicyPromise = (containerId, uniqueKeyPolicy) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const containerDefinition = {
      id: containerId,
      uniqueKeyPolicy: uniqueKeyPolicy
    };
    database.containers.create(containerDefinition)
    .then((result) => {
      resolve(result);
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Create Container with Default Time-To-Live (TTL)

Lastly, let’s see how we can create a container with default time-to-live (TTL). Again, let’s first understand TTL.

What is TTL

Let’s assume that you’re storing application logs in a container and you want these logs to be deleted automatically after 30 days. This is exactly what TTL does.

By defining a proper TTL, you can instruct the Cosmos DB engine to delete items from a container after a certain amount of time.

When defining a TTL for a container, there can be 3 possible values:

  1. Undefined: If you don’t define any TTL on the container, items remain in the container until you delete them.
  2. -1: When you set the TTL on the container to -1, you’re essentially telling the Cosmos DB engine that you’ll define the TTL on individual items. If no TTL is defined on an item, that item will remain in the container until you delete it.
  3. Positive Integer Value: When you set the TTL to a positive integer value, items will automatically be deleted after that many seconds. For example, if you set the TTL on the container to 600, items will be deleted automatically from the container after 600 seconds (or 10 minutes).
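The third case can be sketched locally. Cosmos DB tracks an item's last-modified time in the _ts system property (Unix seconds), and an item with a TTL becomes eligible for deletion once that many seconds have passed since then; here's an illustrative check:

```javascript
// Illustrative only: an item is past its TTL once more than
// ttlInSeconds have elapsed since its last modification (_ts).
const isExpired = (item, ttlInSeconds, nowInSeconds) => {
  return nowInSeconds > item._ts + ttlInSeconds;
};

const item = { id: '1', _ts: 1559621152 };
console.log(isExpired(item, 600, 1559621152 + 599)); // false – still alive
console.log(isExpired(item, 600, 1559621152 + 601)); // true – past its TTL
```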

Now let’s take a look at the code.

Using Async/Await

Here’s the code to create a container with TTL using async/await:

const createContainerWithTtlAsync = async (containerId, ttl) => {
  const client = getClient();
  const database = client.database(databaseId);
  const containerDefinition = {
    id: containerId,
    defaultTtl: ttl
  };
  const result = await database.containers.create(containerDefinition);
  return result;
};

What we’re doing here is providing a value for the defaultTtl attribute in the container definition.

Using Promise

And here’s the code if you were to use promise:

const createContainerWithTtlPromise = (containerId, ttl) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const containerDefinition = {
      id: containerId,
      defaultTtl: ttl
    };
    database.containers.create(containerDefinition)
    .then((result) => {
      resolve(result);
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Get Container Properties

Now let’s see how we can read container properties.

Using Async/Await

Here’s the code to get properties of a container using async/await:

const getContainerPropertiesAsync = async (containerId) => {
  const client = getClient();
  const database = client.database(databaseId);
  const container = database.container(containerId);
  const result = await container.read();
  return result;
};

What we’re doing here is getting an instance of CosmosClient. After that we’re getting an instance of Database class using database() method of client. Then we’re creating an instance of Container class using container() method of database. Finally we’re calling read() method on that container object to get the properties of that container.

The output of this method is an object that has the following key members:

  • body: This contains the system properties of the container like _rid, _self, _etag, _ts etc.
  • headers: This contains the response headers.
  • container: This actually is an instance of Container class.

Using Promise

And here’s the code to do so if you were to use promise:

const getContainerPropertiesPromise = (containerId) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const container = database.container(containerId);
    container.read()
    .then((result) => {
      resolve(result)
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Update Container

Now let’s see how we can update a container. Please note that when updating a container, not all properties of that container can be updated. For example, you can’t update a container’s PartitionKey.

Properties that you could update in a container are:

  • Indexing policy: You can change the indexing policy on a container.
  • Default TTL: You can change the default TTL on a container.

An update operation on a container is a “replace” operation. What that means is that you need the full container definition: take the existing definition, change the properties you wish to change, and then save the container with the updated definition.
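The read-modify-replace pattern can be sketched without any SDK calls (illustrative property values):

```javascript
// Pretend this is the definition we just read from the service.
const existingDefinition = {
  id: 'myContainer',
  indexingPolicy: { indexingMode: 'consistent' },
  defaultTtl: -1
};

// Clone so the original stays untouched, then apply only the change
// we want; the whole definition would then be sent back via replace().
const updatedDefinition = Object.assign({}, existingDefinition, { defaultTtl: 600 });

console.log(updatedDefinition.defaultTtl); // 600
console.log(updatedDefinition.id);         // 'myContainer' – unchanged
```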

Let’s take a look at the code now.

Using Async/Await

Here’s the code to update a container’s indexing policy using async/await:

const updateContainerWithIndexingPolicyAsync = async (containerId, newIndexingPolicy) => {
  const client = getClient();
  const database = client.database(databaseId);
  const container = database.container(containerId);
  const getContainerPropertiesResult = await container.read();
  const containerDefinition = getContainerPropertiesResult.body;
  containerDefinition.indexingPolicy = newIndexingPolicy;
  const result = await container.replace(containerDefinition);
  return result;
};

And here’s the code to update a container’s TTL using async/await:

const updateContainerWithTtlAsync = async (containerId, newTtl) => {
  const client = getClient();
  const database = client.database(databaseId);
  const container = database.container(containerId);
  const getContainerPropertiesResult = await container.read();
  const containerDefinition = getContainerPropertiesResult.body;
  containerDefinition.defaultTtl = newTtl;
  const result = await container.replace(containerDefinition);
  return result;
};

What we’re doing here is getting an instance of CosmosClient. After that we’re getting an instance of Database class using database() method of client. Then we’re creating an instance of Container class using container() method of database. Then we’re calling read() method on that container object to get the properties of that container.

Once we have the properties of the container, we extract the container definition from body property of the read method result and change the desired property. So now we have updated container definition. We then call replace() method on the container object to update the container.

Using Promise

And here’s the code to do so if you were to use promise:

const updateContainerWithIndexingPolicyPromise = (containerId, newIndexingPolicy) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const container = database.container(containerId);
    container.read()
    .then((result) => {
      const containerDefinition = result.body;
      containerDefinition.indexingPolicy = newIndexingPolicy;
      container.replace(containerDefinition)
      .then((result) => {
        resolve(result);
      })
      .catch((error) => {
        reject(error);
      });
    })
    .catch((error) => {
      reject(error);
    });
  });
};
const updateContainerWithTtlPromise = (containerId, newTtl) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const container = database.container(containerId);
    container.read()
    .then((result) => {
      const containerDefinition = result.body;
      containerDefinition.defaultTtl = newTtl;
      container.replace(containerDefinition)
      .then((result) => {
        resolve(result);
      })
      .catch((error) => {
        reject(error);
      });
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Delete Container

Next, let’s see how we can delete a container.

Using Async/Await

Here’s the code to delete a container using async/await:

const deleteContainerAsync = async (containerId) => {
  const client = getClient();
  const database = client.database(databaseId);
  const container = database.container(containerId);
  await container.delete();
  return true;
};

What we’re doing here is getting an instance of CosmosClient. After that we’re getting an instance of Database class using database() method of client. Then we’re creating an instance of Container class using container() method of database. Finally we’re calling the delete() method on that container object to delete that container.

Using Promise

And here’s the code to do so if you were to use promise:

const deleteContainerPromise = (containerId) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const container = database.container(containerId);
    container.delete()
    .then(() => {
      resolve(true)
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Get Container Throughput

Next, let’s see how we can find the throughput we have defined on a container.

Considering we defined the throughput on the container, we would expect a property on that container telling us about it, but that’s not the case :).

To get the throughput on a container, one would actually need to make use of Offers API. Let’s see how that works.

Using Async/Await

Here’s the code to get an offer on a container using async/await:

const getContainerOfferAsync = async (containerId) => {
  const client = getClient();
  const database = client.database(databaseId);
  const container = database.container(containerId);
  const getContainerPropertiesResult = await container.read();
  const containerProperties = getContainerPropertiesResult.body;
  const selfLink = containerProperties._self;
  const querySpec = {
    query: 'SELECT * FROM root r WHERE  r.resource = @link',
    parameters: [{
        name: '@link',
        value: selfLink
    }]
  };
  const offerListingResult = await client.offers.query(querySpec).toArray();
  const offer = offerListingResult.result[0];
  return offer;
};

What we’re doing here is getting an instance of CosmosClient. After that we’re getting an instance of Database class using database() method of client. Then we’re creating an instance of Container class using container() method of database. Then we’re calling read() method on that container object to get the properties of that container.

In order to get an offer on a resource (a container in this case), we need the selfLink property of that resource, and that’s what we’re extracting in the code above. We then call the query method on the client’s offers property to get the offer available on our container. The offer returned looks something like this:

{ 
    resource: 'dbs/dEwOAA==/colls/dAp1CC==',
    offerType: 'Invalid',
    offerResourceId: 'dAp1CC==',
    offerVersion: 'V2',
    content:
    { 
        offerThroughput: 400,
        offerIsRUPerMinuteThroughputEnabled: false 
    },
    id: 'BlZx',
    _rid: 'BlZx',
    _self: 'offers/BlZx/',
    _etag: '"00005200-0000-0100-0000-5cf5ee200000"',
    _ts: 1559621152 
}

To get the throughput, simply read the offerThroughput property of the content property of the result.
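For example, given an offer object shaped like the one above (mocked here), reading the throughput looks like this:

```javascript
// Mocked offer with the same shape as the sample output above.
const offer = {
  offerVersion: 'V2',
  content: {
    offerThroughput: 400,
    offerIsRUPerMinuteThroughputEnabled: false
  }
};

const throughput = offer.content.offerThroughput;
console.log(throughput); // 400
```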

Using Promise

And here’s the code to do so if you were to use promise:

const getContainerOfferPromise = (containerId) => {
  return new Promise((resolve, reject) => {
    const client = getClient();
    const database = client.database(databaseId);
    const container = database.container(containerId);
    container.read()
    .then((result) => {
      const containerProperties = result.body;
      const selfLink = containerProperties._self;
      const querySpec = {
        query: 'SELECT * FROM root r WHERE  r.resource = @link',
        parameters: [{
            name: '@link',
            value: selfLink
        }]
      };
      client.offers.query(querySpec).toArray()
      .then((offerListingResult) => {
        const offer = offerListingResult.result[0];
        resolve(offer);
      })
      .catch((error) => {
        reject(error);
      });
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Update Container Throughput

Last thing we will do in this post is see how we can update the throughput on a container.

The way to accomplish this is by getting the offer on the container, updating the offerThroughput property in that offer and then replacing the existing offer with the modified one.

Using Async/Await

Here’s the code to update an offer on a container using async/await:

const updateContainerProvisionedThroughputAsync = async (containerId, provisionedThroughput) => {
  const offerDetails = await getContainerOfferAsync(containerId);
  offerDetails.content.offerThroughput = provisionedThroughput;
  const client = getClient();
  const offer = client.offer(offerDetails.id);
  await offer.replace(offerDetails);
};

The code is pretty straightforward. First, we get the offer on the container using our getContainerOfferAsync method. Then we update the provisioned throughput in that offer.

Next, we get an instance of CosmosClient. Using that and the offer id, we create an instance of the Offer class and finally call the replace method on it to replace the offer.
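Since we know the programmatic limits mentioned earlier in this post, a hypothetical guard (not part of the SDK) you might run before calling replace could look like this:

```javascript
// Limits mentioned earlier in this post (current at the time of writing).
const MIN_THROUGHPUT = 400;    // minimum RU/s for a container
const MAX_THROUGHPUT = 100000; // maximum RU/s settable programmatically

// Hypothetical validation helper to call before offer.replace().
const validateThroughput = (throughput) => {
  if (!Number.isInteger(throughput)) {
    throw new Error('Throughput must be an integer.');
  }
  if (throughput < MIN_THROUGHPUT || throughput > MAX_THROUGHPUT) {
    throw new Error(`Throughput must be between ${MIN_THROUGHPUT} and ${MAX_THROUGHPUT} RU/s.`);
  }
  return throughput;
};

console.log(validateThroughput(1000)); // 1000
```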

Using Promise

And here’s the code to do so if you were to use promise:

const updateContainerProvisionedThroughputPromise = (containerId, provisionedThroughput) => {
  return new Promise((resolve, reject) => {
    getContainerOfferPromise(containerId)
    .then((offerDetails) => {
      offerDetails.content.offerThroughput = provisionedThroughput;
      const client = getClient();
      const offer = client.offer(offerDetails.id);
      offer.replace(offerDetails)
      .then((result) => {
        resolve(result);
      })
      .catch((error) => {
        reject(error);
      });
    })
    .catch((error) => {
      reject(error);
    });
  });
};

Wrapping Up

That’s it for this post! In the next posts in this series we will do the same with documents and other resources, so stay tuned for that.

If you find any issues with the code samples or any other information in this post, please let me know and I will fix them at the earliest.

Happy Coding!
