Chunk save API

From Sense/Net Wiki
Revision as of 14:01, 14 September 2015 by MiklosToth (Talk | contribs) (Copy from stream - from version 6.4 Patch 2)



Upload action
The Sense/Net ECMS Content Repository can store large numbers of documents. Uploading files to the repository from a web browser or a third-party application can be done using the Upload action, which can handle even huge files. However, when you as a developer need to manage large files in your custom solution (e.g. in a workflow that creates large files), it is advisable to use the built-in chunk save API of Sense/Net to save files in chunks instead of keeping everything in memory. This keeps the memory footprint of your application low.


The chunk save API was designed to make saving files into the Content Repository as easy as possible without consuming too much memory. The idea is that the process consists of three parts:

  • starting the operation
  • writing chunks
  • committing the changes

The following sections describe the steps above and the way you can use the chunk save API.

Start chunk save

Before you start a chunk save operation you need to make sure the content already exists and has been saved in a way that indicates a long-running operation is about to begin. For details about this special saving method please visit the Multistep saving article.

var myContent = Content.CreateNew("File", parentFolder); // or Content.Load(path);
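
As noted above, the content has to be put into a multistep saving state before any chunks can be written. The authoritative pattern is described in the Multistep saving article; the sketch below assumes that the multistep state is entered by calling the content handler's Save overload with the SavingMode.StartMultistepSave value:

```csharp
// Sketch only: entering multistep saving mode before chunk upload.
// Assumption: SavingMode.StartMultistepSave on the content handler's Save
// method is the entry point - see the Multistep saving article to confirm.
var myContent = Content.CreateNew("File", parentFolder); // or Content.Load(path);
myContent.ContentHandler.Save(SavingMode.StartMultistepSave);
```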

Now you may start the chunk save operation by calling the BinaryData.StartChunk method. This initializes a new chunk saving operation and returns a token that you will need to pass, unmodified, to the subsequent chunk save API calls.

var chunkToken = BinaryData.StartChunk(myContent.Id);

Saving chunks

From now on you are able to write the file chunks into the database. You will need the parts of the file in a byte array that you can pass to the chunk save API. You also have to provide an offset every time, which tells the underlying system where to write the chunk. The total length of all the chunks must equal the fileFullLength value provided below.

Starting with version 6.3.1 Patch 4 it is possible to write chunks to the database out of order, even from parallel threads to speed things up. In previous versions file chunks had to be provided sequentially, from the beginning of the file to the end.

while (bytesLeft > 0)
	BinaryData.WriteChunk(contentId, chunkToken, fileFullLength, chunkData, chunkStart); // read the next chunk, then advance chunkStart and decrease bytesLeft
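
The loop above can be fleshed out into a complete sequential upload. This is a sketch under assumptions: the source data comes from a local file (the path is illustrative), contentId and chunkToken come from the StartChunk call, and the 1 MB chunk size is an arbitrary choice, not a Sense/Net requirement:

```csharp
// Sketch only: sequential chunk upload from a local file.
// Assumptions: 'contentId' and 'chunkToken' come from the StartChunk call
// above; the chunk size (1 MB) and the file path are arbitrary examples.
const int ChunkSize = 1024 * 1024;

using (var stream = System.IO.File.OpenRead(@"C:\temp\bigfile.bin"))
{
    var fileFullLength = stream.Length;
    var buffer = new byte[ChunkSize];
    long chunkStart = 0;
    int bytesRead;

    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // WriteChunk gets the chunk bytes and the offset where they belong.
        var chunkData = new byte[bytesRead];
        Array.Copy(buffer, chunkData, bytesRead);
        BinaryData.WriteChunk(contentId, chunkToken, fileFullLength, chunkData, chunkStart);
        chunkStart += bytesRead;
    }
}
```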

During this saving operation no other user can access the partial content of the file. It will be accessible only after the commit operation in the next step.
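
Because chunks can be written out of order from version 6.3.1 Patch 4, the write phase can also be parallelized. A minimal sketch, assuming the whole file is already in memory as a byte array (fileBytes, a hypothetical variable) and using Parallel.ForEach over the chunk offsets:

```csharp
// Sketch only: parallel, out-of-order chunk writing (version 6.3.1 Patch 4+).
// Assumptions: 'fileBytes' holds the whole file; 'contentId' and 'chunkToken'
// come from the StartChunk call; the chunk size is an arbitrary choice.
const int ChunkSize = 1024 * 1024;
long fileFullLength = fileBytes.Length;

var offsets = new System.Collections.Generic.List<long>();
for (long offset = 0; offset < fileFullLength; offset += ChunkSize)
    offsets.Add(offset);

System.Threading.Tasks.Parallel.ForEach(offsets, chunkStart =>
{
    var length = (int)Math.Min(ChunkSize, fileFullLength - chunkStart);
    var chunkData = new byte[length];
    Array.Copy(fileBytes, chunkStart, chunkData, 0, length);
    BinaryData.WriteChunk(contentId, chunkToken, fileFullLength, chunkData, chunkStart);
});
```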

Committing the changes

After the last chunk is written you need to commit the changes. Calling the CommitChunk method tells the system to finalize the binary content and finish the chunk saving operation. The content will remain locked by the current user; you will have to close the multistep save operation manually by calling the FinalizeContent method. If you want to specify some metadata for the binary (e.g. file name or content MIME type) you can do so by filling the optional binary metadata parameter.

var bd = new BinaryData();
bd.FileName = new BinaryFileName(fileName);
BinaryData.CommitChunk(contentId, chunkToken, fileFullLength, binaryMetadata: bd);
var myContent = Content.Load(contentId); // load the content again after committing to get the latest version
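
As mentioned above, the content remains locked after the commit and the multistep save must be closed manually with the FinalizeContent method. A sketch of that closing step, assuming FinalizeContent is called on the underlying content handler (check the Multistep saving article for the exact signature):

```csharp
// Sketch only: closing the multistep save after the chunks are committed.
// Assumption: FinalizeContent lives on the content handler (the underlying
// node) - confirm against the Multistep saving article.
var myContent = Content.Load(contentId);
myContent.ContentHandler.FinalizeContent();
```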

Interrupted saving operations

If the operation above is interrupted, the database contains a partially saved file that is not accessible to anybody. You may delete this 'staging' binary row by calling the UndoCheckout method on the content, or by deleting the content itself. It is also possible to continue (resume) a previously interrupted chunk save operation, but you will need the exact same chunk token that was returned by the StartChunk method in the first attempt, as well as the exact position where you left off. If you continue calling the WriteChunk method with these parameters and the missing chunks, the process can be completed. If you call the StartChunk method on the content again, the previous staging binary value is deleted and a new empty row is created for the binary.
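
Resuming can be sketched as follows, assuming the caller persisted the original token and the last successfully written offset before the interruption (savedChunkToken and resumeOffset are hypothetical variables, and the chunk size and file path are arbitrary examples):

```csharp
// Sketch only: resuming an interrupted chunk save.
// Assumptions: 'savedChunkToken' and 'resumeOffset' were persisted by the
// caller before the interruption; ChunkSize and the path are examples.
const int ChunkSize = 1024 * 1024;

using (var stream = System.IO.File.OpenRead(@"C:\temp\bigfile.bin"))
{
    var fileFullLength = stream.Length;
    stream.Seek(resumeOffset, System.IO.SeekOrigin.Begin);
    var buffer = new byte[ChunkSize];
    long chunkStart = resumeOffset;
    int bytesRead;

    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        var chunkData = new byte[bytesRead];
        Array.Copy(buffer, chunkData, bytesRead);
        BinaryData.WriteChunk(contentId, savedChunkToken, fileFullLength, chunkData, chunkStart);
        chunkStart += bytesRead;
    }

    BinaryData.CommitChunk(contentId, savedChunkToken, fileFullLength);
}
```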

Copy from stream - from version 6.5

Starting with version 6.5 it is possible to save large files on the server side without having to worry about chunking: a single method saves a content binary from a stream and handles the chunking for you in the background.

BinaryData.CopyFromStream(contentId, stream);
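
A typical call, assuming the source is a local file opened as a stream (the path is an arbitrary example):

```csharp
// Sketch: saving a binary from a stream; CopyFromStream chunks the data
// internally. The file path is an illustrative example.
using (var stream = System.IO.File.OpenRead(@"C:\temp\bigfile.bin"))
{
    BinaryData.CopyFromStream(contentId, stream);
}
```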
