Generating Excel Charts with MarkLogic

GIVE MY CREATION LIFE! And some fun with Formulas too…

This is another interesting one I see regularly: “How can I generate Excel Charts from MarkLogic Server?”

Charts are actually rendered from DrawingML found in the .xlsx package.  The DrawingML is embedded in SpreadsheetML, which is the Open XML format for Excel 2007/2010.

You don’t want to mess with DrawingML, as it’s a nasty frickin’ riddle, wrapped in an enigma, inside a Russian-doll-style matrix of insanity and pain.

Word, Excel and PowerPoint are producers and consumers of XML.  To some extent and to varying degrees, each of their respective XML formats can be understood and worked with in a relatively straightforward and reasonable way.  Sometimes though, the XML generated by these applications is really just a serialization of their object model and you’ll just waste a ton of time and find yourself in an extremely uncomfortable place (ed-like the back of a Volkswagen?) trying to figure the XML out when you don’t have to.  So let’s leave the DrawingML be. Capisci?

Think about it this way:  A chart in a workbook is tied to certain cell values in a worksheet.  When the cell values update, the chart dynamically updates.  At the end of the day, the DrawingML is just a snapshot of what the chart looked like based on the cell values when the Workbook was saved in Excel. (ed-Pivot tables work similarly in this way, but that’s a post for another day.)

Now let’s say we have a workbook containing a chart.  We know we can save our .xlsx to MarkLogic Server and have it automatically unzipped for us, its component XML parts made immediately available for search and re-use.  We can then update our extracted worksheets in the Server using XQuery.  Finally, we can zip the extracted workbook components back up and open the updated .xlsx into Excel.  Excel will automatically refresh its chart for us when it consumes the XML, so we see the latest visualization of our chart based on the information we added to the worksheets.

5 Steps to Chart Freedom

Step 1

Create your chart in a workbook and drive it off of some cell values.  Note the cells and the name of the worksheet you’re driving your chart from. (example: Sheet1, cells: B2, B3, B4, etc.) I’ve provided a sample .xlsx here.

On Sheet1 we see download counts for a fictional company’s widgets for the month of September. The chart shows downloads for the widgets Foo, Bar, and Wumpus.  The chart columns correspond to cells B2, B3, and B4.

On Sheet2 we see sales counts for a fictional company’s widgets for each salesperson. The chart shows total sales for each salesperson.  The chart sections correspond to cells E2, E3, and E4.  Look closer and you’ll see that the cell values in column E driving the chart are actually the result of formulas; they are SUMs of all widgets for each salesperson row.  Note that the cells B6, C6, D6, and E6 all contain SUM formulas for their respective columns as well.

Step 2

Enable the Office OpenXML Extract and Status Change Handling CPF pipelines for your MarkLogic database so the .xlsx will automatically be unzipped when ingested into MarkLogic and its component parts made available for update.  Also ensure you have the URI Lexicon enabled for your database. An example of how to set this up can be found here.

Step 3

Save your .xlsx to MarkLogic. Once saved, the .xlsx is unzipped, and we can now manipulate its extracted XML component parts directly.  The idea is to save workbooks containing your charts as templates within MarkLogic and then update the extracted worksheet parts based on new information being saved to your database.
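
If you’re following along in CQ, the load can be as simple as the following; the local path and URI are just placeholders for wherever your template workbook lives (note the leading “/” on the URI so CPF will process the document):

xquery version "1.0-ml";
(: Load the template workbook; the path and uri below are placeholders :)
xdmp:document-load("C:\MySpreadsheet.xlsx",
                   <options xmlns="xdmp:document-load">
                     <uri>/MySpreadsheet.xlsx</uri>
                   </options>)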

Step 4

Use the XQuery API that comes with the MarkLogic Toolkit for Excel to set the cell values for your chart in the extracted worksheet.  In particular, look at the function excel:set-cells() for updating worksheets.  Evaluate the following in CQ.

Note: you may need to update the code samples below to reflect your workbook and where you saved it in MarkLogic.

xquery version "1.0-ml";

import module namespace excel="http://marklogic.com/openxml/excel" at "/MarkLogic/openxml/spreadsheet-ml-support.xqy";

let $doc1 := "/MySpreadsheet_xlsx_parts/xl/worksheets/sheet1.xml"
let $doc2 := "/MySpreadsheet_xlsx_parts/xl/worksheets/sheet2.xml"
let $sheet1 := fn:doc($doc1)/node()
let $sheet2 := fn:doc($doc2)/node()

let $cell1 := excel:cell("B2", 120)
let $cell2 := excel:cell("B3", 99)
let $cell3 := excel:cell("B4", 456)

let $cell4 := excel:cell("D3", 127)
let $cell5 := excel:cell("E3", (), "SUM(B3:D3)")

return (xdmp:document-insert($doc1, excel:set-cells($sheet1, ($cell1, $cell2, $cell3))),
        xdmp:document-insert($doc2, excel:set-cells($sheet2, ($cell4, $cell5))))

In the code above, for Sheet1, we see that we use the excel:cell() constructor to create cells for B2, B3, and B4.  We set the values for these cells to new numbers. These numbers could be coming from the results of another query.  We update the worksheet, using excel:set-cells(), passing the function the sheet we want to update, as well as a sequence of cells we’d like added and/or updated on the referenced sheet.  Finally, we xdmp:document-insert() our updated document, overwriting the existing one with our updated worksheet.  Remember, Sheet1 just held the simple chart driven directly from the cell values.

With Sheet2, we again use excel:cell() to create cells for D3 and E3. Sheet2 is more interesting as the chart here is driven from cells that contain formulas. For E3, we create a cell using excel:cell(), setting the value of the cell to the empty sequence, () , and passing in the formula for the cell.  Again we excel:set-cells() to update our worksheet and xdmp:document-insert() to save our updated worksheet back to the Server.

Note on excel:cell(): This function creates a new cell, so if you wish to retain an existing formula for a cell when you update it in a worksheet, you can’t use the 2-argument excel:cell() function.  If you did, you’d lose the formula for that cell when you overwrite the XML.  You must create the cell with the formula, as we did above for E3.  If this doesn’t work for you, you can always roll your own XQuery to update cell values for worksheets containing formulas in a different way.
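
For example, here’s a minimal sketch of that roll-your-own approach, assuming the standard SpreadsheetML layout where a cell is a c element with an optional f (formula) child and a v (cached value) child.  It swaps out only the cached value of E3 and leaves the formula alone; the worksheet URI and cell reference are just the ones from our example above.

xquery version "1.0-ml";
declare namespace ssml = "http://schemas.openxmlformats.org/spreadsheetml/2006/main";

(: Replace only the cached value <v> of cell E3, leaving its <f> formula intact. :)
let $v := fn:doc("/MySpreadsheet_xlsx_parts/xl/worksheets/sheet2.xml")
            /ssml:worksheet/ssml:sheetData/ssml:row/ssml:c[@r = "E3"]/ssml:v
return
  if ($v) then xdmp:node-replace($v, <ssml:v>379</ssml:v>)
  else ()  (: no cached value present; nothing to replace :)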

Note on Excel formulas: Unlike charts, cells containing formulas will not recalculate automatically when you open the updated workbook in Excel if those cells already contain values. The value of a cell within the worksheet XML is considered the cached value by Excel and will be displayed when the workbook is opened.  This is done for performance reasons: formula-heavy worksheets would take forever to open if every formula had to be recalculated on load, so calculation is postponed.  As a result, though, you can create XML for a worksheet that, when consumed by Excel, results in a cell displaying the wrong value given its formula.
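
To make that concrete, a formula cell inside sheet2.xml looks roughly like this (the cached value shown is illustrative):

<c r="E3">
  <f>SUM(B3:D3)</f>
  <v>379</v>
</c>

The v element is the cached value Excel displays on open; the f element is the formula it will only recalculate later.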

To get a formula to calculate the value for a cell when you open a workbook in Excel and ensure the correct cell value is displayed, you need to set the value of the cell to nothing.  You can do this using excel:cell(), setting the value of the cell to the empty sequence: ().

For more information on the excel:* functions,  check out the XQuery API docs that come with the Toolkit for Excel.  There are a lot of functions available, all documented and with examples of usage.

Step 5

Zip up the updated .xlsx from its extracted component parts and open it in Excel.  When you do this, it doesn’t matter what the DrawingML is.  Excel reads the cell values when it consumes the XML and will update the chart automatically.  The next time you save the workbook, the DrawingML is updated to reflect what the chart looks like based on the latest cell values. Evaluate the following in CQ.

xquery version "1.0-ml";

let $directory := "/MySpreadsheet_xlsx_parts/"
let $uris := cts:uris("","document",cts:directory-query($directory,"infinity"))
let $parts := for $i in $uris let $x := fn:doc($i) return $x

let $manifest := <parts xmlns="xdmp:zip">
                 {
                   for $i in $uris
                   let $dir := fn:substring-after($i,$directory)
                   let $part := <part>{$dir}</part>
                   return $part
                 }
                 </parts>

let $xlsx := xdmp:zip-create($manifest, $parts)
return xdmp:save("C:\MyUpdatedSheet.xlsx",$xlsx)

Open MyUpdatedSheet.xlsx into Excel.

BooYaa!  We update a few cells on Sheet1, and our chart automatically updates for us when we open the .xlsx into Excel.

Now take a look at Sheet2.  We updated D3 and set the value of E3 to (). Because E3 had no cached value, its SUM formula was calculated when the workbook was opened.  Since the chart is driven from E2, E3, and E4, it updated properly as well.  WooHaa!

But take a closer look at cells D6 and E6.  They each contain SUM formulas for their columns, and they’re displaying the wrong values!  (ed-#fail) This is because we didn’t set their values to nothing.  Since the cells contained values in the XML for the worksheet, their formulas were not calculated when the workbook was opened and the cached values were displayed.  If you click on each of those cells you’ll see its formula; click off of them, and the cells will recalculate and display the correct values.

Bring Excel Workbooks to Life!

So the title was a bit misleading: we don’t actually generate charts so much as create the appropriate XML for Excel worksheets so that the Excel application will update and render charts for us when it consumes the XML.  But once you understand a little bit of the SpreadsheetML format and how Excel behaves when consuming XML for charts and formulas, the doors open up to some very interesting possibilities.

The above examples are intentionally simple, but think about this…

Instead of a dead .xlsx that sits lifeless on the filesystem, alive only when opened and manipulated directly in Excel, workbooks can now stay alive in the Server, constantly updated by complex queries evaluated as new information is saved to the database.  These workbooks can then be dynamically zipped up when called upon, to open in Excel and provide snapshot visualizations and results for a point in time.  Excel will consume the XML and can update charts and calculate formulas when opening this snapshot workbook, while the underlying, extracted workbook lives on and continues to be updated.
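
As a hedged sketch of that idea, here’s one way a cell in our template could be refreshed from a live query result; the word query is made up for illustration, and excel:cell() and excel:set-cells() are the same Toolkit functions used above:

xquery version "1.0-ml";

import module namespace excel="http://marklogic.com/openxml/excel" at "/MarkLogic/openxml/spreadsheet-ml-support.xqy";

(: Illustrative: drive the B2 download count from a query over the database
   instead of a hand-typed number. :)
let $doc := "/MySpreadsheet_xlsx_parts/xl/worksheets/sheet1.xml"
let $count := xdmp:estimate(cts:search(fn:collection(), cts:word-query("Foo")))
let $cell := excel:cell("B2", $count)
return xdmp:document-insert($doc, excel:set-cells(fn:doc($doc)/node(), $cell))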

But this is just one way to use MarkLogic and Excel together.  There’s always another way…


Dude, Where’s My Worksheet?

An Excel workbook can get “hidden” when you embed it within a PowerPoint.

In Office 2007, it’s possible to embed an Excel worksheet within a PowerPoint slide. You can do this from within PowerPoint by navigating to the ‘Insert’ tab in the Ribbon and then clicking ‘Object’ in the Text grouping.  The dialog box that appears will allow you to create one of several predefined New objects in your slide, or you can Browse to insert an existing object from a file.  From here we can select an Excel .xlsx we’ve created, and the worksheet will appear within our slide.

This can be very cool and useful if we have some analysis we’d like to present.  Also, whenever we select the inserted worksheet within the slide, it will automatically open in Excel so we can continue to tweak on it using Excel’s functionality.

So embed an Excel document in a PowerPoint (from an existing file). But don’t touch or select or tweak the embedding in the slide, just save the presentation .pptx somewhere. Change the .pptx extension to .zip, unzip the document, and you’ll find the embedded Excel .xlsx under /ppt/embeddings.  Yep, there’s a complete Excel .xlsx package within your PowerPoint .pptx.  Now open the .xlsx directly in Excel by double-clicking it; the Excel application will launch, but NOTHING IS VISIBLE! EVEN THOUGH THE SPREADSHEET IS THERE! EXCEL WILL APPEAR AS IF NO WORKBOOKS ARE OPENED!!!


You have to navigate to ‘View’ on the Ribbon and click ‘Unhide’ to see the document.

What Happened?

If you unzip the .xlsx and examine workbook.xml, you’ll find:

/workbook/bookViews/workbookView/@visibility = "hidden"

This attribute is set for the embedded document, even though it wasn’t set in the original prior to embedding.

But guess what: if you had tweaked the embedding by selecting or resizing it in the slide prior to saving the .pptx, this attribute wouldn’t have been set. You can open the .xlsx and the workbook IS VISIBLE.  This seems to be an odd quirk of PowerPoint’s default behavior that I stumbled upon while embedding documents programmatically.

I can understand why a behavior like this might go unnoticed if it’s expected the embedded document will only ever be opened within PowerPoint again. In PowerPoint the embedded .xlsx opens and is always visible within the context of the slides.

Embed Shmembed.

The thing is, I’m saving these documents to MarkLogic Server, where I automatically unzip all Office 2007 documents so I can search and reuse the XML, document parts, and original source .pptx, .xlsx, .docx, etc. in the creation of new Office and other documents.  In MarkLogic, one of the things I’m doing for an app is making embedded Office docs available as independent documents within search results.  It’s freedom baby, yeah! And while we’re seeing this behavior with embedding, we now know that it’s possible to hide any workbook using that simple attribute.

So if we want our users to be able to view the Excel documents they are selecting for use, a simple solution is to just remove the @visibility attribute and rezip the .xlsx in MarkLogic, either in the saved .xlsx, or when rezipping the embedded doc from its extracted parts, or both.  Remove that attribute and we’re all good for visibility.
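
Here’s a hedged sketch of that cleanup, assuming the embedded workbook’s extracted parts live under a _parts directory as described elsewhere on this blog; the URI is illustrative:

xquery version "1.0-ml";
declare namespace ssml = "http://schemas.openxmlformats.org/spreadsheetml/2006/main";

(: Delete the visibility attribute so the workbook opens visible in Excel. :)
let $vis := fn:doc("/foo_xlsx_parts/xl/workbook.xml")
              /ssml:workbook/ssml:bookViews/ssml:workbookView/@visibility
return
  if ($vis = "hidden") then xdmp:node-delete($vis) else ()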

Or,  if you’re feeling particularly ornery,  I guess you could add the attribute to all your organization’s Excel workbooks and watch your users go mad?  Maybe there’s a use-case for that, I don’t know.

But I thought this was interesting, so I figured I’d share. Maybe this will help someone who’s run into this random behavior, perhaps while working with AddOLEObject as well.  You can have a .xlsx on your filesystem.  You can unzip it and see the workbook and worksheet parts.  You can double-click the .xlsx and it will open in Excel, but nothing is visible, even though the worksheet information *IS* saved in the .xlsx package and available in the Excel application.  It turns out the document is actually open, but hidden.  You just have to ‘Unhide’ it.

Getting started with Open XML, ODF, IDML, and other zipped XML documents in MarkLogic

Microsoft Office 2007, Open Office, and Adobe’s InDesign CS4 all have something in common.  These applications all save their documents as .zip archives of XML files.  As an example, take any .docx, .odt, or .idml file and change the file extension to .zip. Extract the contents and within you’ll find multiple interrelated XML documents.  There are probably other document formats that follow this model as well, but these are the 3 I see questions for on a regular basis. When working with documents of this type, the first thing many people want to do is work with the XML within the zip packages.  Today’s post will demonstrate tools provided by MarkLogic Server for working with these document types, as well as some methods for managing and using the extracted pieces.

To play along at home, install MarkLogic Server, and setup CQ.  The following examples work with the default Documents database and default Docs HTTP Server.  I’ll be using Word 2007 documents, as that’s kinda my thing, but you can use the same code, tools, and methods with the other file formats as well. Just replace the .docx in the samples below with a .odt or .idml file, etc., and everything will just auto-magically work. Ultimately they’re all just zip files of XML and associated resources.  Let’s do this!

The Office OpenXML Extract pipeline

It doesn’t get any easier than this.  If you’re working with Office 2007 documents, you can quickly configure the Server to automatically unzip these files when you save them to your database.  The XML within will be saved in a directory named for the original file, maintaining the naming and directory structure they had within the .zip.  To make this happen, we just need to install content processing and attach the Office OpenXML Extract pipeline.

Install Content Processing

In the Admin UI, navigate to:

Databases -> Documents -> Content Processing

We see a message informing us that Content Processing is not installed. That’s ok.  Click the ‘Install’ tab.

Next we’re presented with an option to ‘enable conversion’.  If you’re running the Community Edition, leave this as false.  Click ‘install’.

Finally, we’re presented with the message ‘Content Processing will be installed for the database Documents without conversion.’ Click ‘ok’.

Note: There are conversion utilities available that require a separate license.  If you are running a version of the Server other than Community, feel free to install these utilities by setting the enable conversion option to true.  For those on the Community Edition, installing content processing allows us to build our own conversion utilities, as well as take advantage of other available pipelines that don’t require the separate license. w00t!

Now that we’ve installed Content Processing, we just need to attach the Office OpenXML Extract pipeline.

Attach a Content Processing Pipeline

In the Admin UI, navigate to:

Databases -> Documents -> Content Processing -> Domains -> Default Documents -> Pipelines

Make sure that the pipelines ‘Status Change Handling’ and ‘Office OpenXML Extract’ are checked.  Then click ‘ok’.

You’ll now see those pipelines are attached.


Let’s take advantage of our freshly configured pipeline. Open CQ and evaluate the following to insert a document.  I’m inserting C:\foo.docx.  The default domain for CPF is the root directory “/”, so remember to prefix the name of your file with “/” in the uri option so the document will be processed.

     xquery version "1.0-ml";
     xdmp:document-load("C:\foo.docx",
                          <options xmlns="xdmp:document-load">
                            <uri>/foo.docx</uri>
                          </options>)

To validate the pipeline ran for the document, evaluate the following in CQ:

     xquery version "1.0-ml";
     xdmp:document-properties("/foo.docx"),
     xdmp:document-properties("/foo_docx_parts/word/document.xml")

Notice in the results returned that A) there are CPF properties on the .docx informing us the XML has been extracted, and B) we are looking at the properties for an extracted Part.  The document.xml is in a sibling directory of the original foo.docx named /foo_docx_parts/.  We now have the original zip package and all its extracted pieces available to us in the Server.  With Content Processing installed and our pipeline configured, anytime we save an Office document to the Server in the future, it will be automatically unzipped and its parts saved similarly for us in a _parts directory.


Note: The .docx and the parts directory are linked.  Delete the .docx, and the related _parts folder and its extracted pieces will be deleted as well.
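
And since the extracted parts are now just XML documents in the database, they’re immediately searchable.  A quick sketch (the search term is made up, and this assumes the /foo_docx_parts/ directory created above):

     xquery version "1.0-ml";
     (: Return the URIs of extracted parts under /foo_docx_parts/ containing "widget" :)
     for $part in cts:search(fn:collection(),
                             cts:and-query((cts:directory-query("/foo_docx_parts/", "infinity"),
                                            cts:word-query("widget"))))
     return xdmp:node-uri($part)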

So, what are pipelines?  What’s Content Processing?  What was all that configuration we did?

CPF in a Nutshell (the Content Processing Framework)

MarkLogic Server includes a framework for processing content that we refer to lovingly as CPF.  CPF stands for the Content Processing Framework.  The gist is this: a document has a lifecycle that starts with creation and advances as users/applications update and modify the document.  CPF provides a way to take action on documents based on where they are in their lifecycle.

A pipeline is an XML document that describes a set of content processing steps. It defines the steps that occur during the processing of documents and defines actions that occur at each step.  These actions can be found in supporting XQuery functions and modules. CPF was built for you to create your own content processing applications, with your own content processing code, and following your own logical and business processes.

If you’d like to know more, you can check out the Content Processing Framework guide; I’ve also provided a quick intro here as well.

Note: the quick intro linked here was written for MarkLogic Server 3.2. For 4.*.* some minor updates are required.  I’ll revisit and update that post in the future, but if you’re interested, the guide should provide enough info to help you modify the example successfully.

But what if we’re not working with Office 2007 documents?  Or we don’t want to use CPF?  Can we still unzip these documents, extract the individual XML files and insert them into our MarkLogic Database?  Yes.

xdmp:zip utilities

When working with .zip files, you’ll want to take a look at the functions xdmp:zip-create(), xdmp:zip-get(), and xdmp:zip-manifest().

I have a Word document, sampleManuscript.docx, that I’ve saved in the directory C:\test.  I can take a look at the names of the files inside the .docx by evaluating the following in CQ:

     xquery version "1.0-ml";
     xdmp:zip-manifest(xdmp:document-get("C:\test\sampleManuscript.docx"))

MarkLogic provides utilities for working with files on the filesystem, but let’s load our document into the Server.

Note: For the following examples, we want to ensure CPF does not process our documents for us. So navigate to pipelines, as we did in our configuration steps above.  Uncheck the Office OpenXML Extract pipeline, and click ‘ok’.  This will detach the pipeline so it will not act on the example document below on load.

     xquery version "1.0-ml";
     xdmp:document-load("C:\test\sampleManuscript.docx",
                          <options xmlns="xdmp:document-load">
                             <uri>/myManuscript/sampleManuscript.docx</uri>
                          </options>)

The above returns the empty sequence. You can validate that your document inserted properly by clicking ‘explore’ in CQ.  Or by evaluating the following:

     xquery version "1.0-ml";
     xdmp:document-properties("/myManuscript/sampleManuscript.docx")

At a minimum, you’ll see a last-modified metadata timestamp for the document.  If you’ve enabled content processing for the database as we did above, you will see other cpf:* properties.  Assuming no pipelines are attached, the .docx will be in cpf:state initial and no action has been taken to extract its XML parts.

If we want to access a file within the .docx, assuming we know the name of the piece we want  in the .zip,  we can get it using xdmp:zip-get().  The following returns the XML for the document.xml file located within our .docx package.

     xquery version "1.0-ml";
     xdmp:zip-get(fn:doc("/myManuscript/sampleManuscript.docx"),"word/document.xml")

Instead of having to know the names of individual pieces within a .zip package and/or having to extract individual files each time we want to access a piece of XML, let’s just unzip and extract the pieces from our .docx and insert them into a directory, similar to how CPF did for us.

     xquery version "1.0-ml";
     declare namespace zip="xdmp:zip";

     let $doc := "/myManuscript/sampleManuscript.docx"
     let $directory-uri := "/myManuscript/sampleManuscript_docx_parts/"
     let $zipfile := fn:doc($doc)
     let $manifest := xdmp:zip-manifest($zipfile)
     for $part-name in $manifest/zip:part
         let $options := if ($part-name = "/_rels/.rels") then
                             <options xmlns="xdmp:zip-get">
                               <format>xml</format>
                             </options>
                        else
                            <options xmlns="xdmp:zip-get"/>
         let $part := xdmp:zip-get($zipfile, $part-name, $options)
         let $part-uri := fn:concat($directory-uri, $part-name)
         return xdmp:document-insert($part-uri, $part)

We just loop through the manifest, extract the pieces, and insert into a directory. Here we’re explicitly telling the server to treat .rels as XML. With a little modification and refinement, we can easily take the above and make it into a re-usable module. We could even use this as a starting point for creating our own CPF pipeline action.
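
As a rough cut at making this re-usable, here’s a hedged sketch of the same loop wrapped up as a local function; the function and parameter names are made up for illustration:

     xquery version "1.0-ml";
     declare namespace zip="xdmp:zip";

     (: Illustrative helper: unzip a package already stored in the database and
        save its parts under the given directory, exactly as the loop above does. :)
     declare function local:extract-parts($doc-uri as xs:string,
                                          $directory-uri as xs:string)
     {
       let $zipfile := fn:doc($doc-uri)
       for $part-name in xdmp:zip-manifest($zipfile)/zip:part
       let $options := if ($part-name = "/_rels/.rels") then
                           <options xmlns="xdmp:zip-get"><format>xml</format></options>
                       else
                           <options xmlns="xdmp:zip-get"/>
       return xdmp:document-insert(fn:concat($directory-uri, $part-name),
                                   xdmp:zip-get($zipfile, $part-name, $options))
     };

     local:extract-parts("/myManuscript/sampleManuscript.docx",
                         "/myManuscript/sampleManuscript_docx_parts/")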

Zip it!


Now that we’ve extracted XML documents from zip files and saved them to the Server, let’s zip ’em back up.

When we extracted the pieces for sampleManuscript.docx above, we saved them to a directory named /sampleManuscript_docx_parts/, and we specified this as a subdirectory of the folder /myManuscript/.

When we insert a document to MarkLogic, we specify the name of the file we are saving with a uri parameter (xs:string).  If this string has “/”s in it, the strings in between the slashes will be treated as directories. So “/” marks the path for our content’s location on the Server.  By default, MarkLogic is configured for automatic directory creation. This is done as many people like using directories to manage their content, and it can be very helpful when loading documents using webDAV.  It’s also very helpful for managing unzipped XML packages: we want to retain the original structure of the document, so that when we zip the pieces back up the document will open successfully in its respective application.

We saved our extracted document pieces to directories, but we don’t know what pieces were inserted, or what subdirectories were created.  If we enable the uri lexicon for our database, we can evaluate a query using cts:uris that will provide us the uris for all the extracted pieces.  We can then use this to create a manifest and zip the pieces back up.

Validate Directory Creation is Automatic

In Admin UI, navigate to:

Databases -> Documents

Scroll down until you see the property ‘directory creation’.  Make sure the dropdown selection is set to ‘automatic’.

For more info on directories, refer to the Admin’s Guide, and Application Developer’s Guide.

Enable URI Lexicons

Scroll up, and a few properties up from ‘directory creation’ you’ll find ‘uri lexicon’.  Set the enabled option to ‘true’ and click ‘ok’.

Now head back to CQ, and evaluate the following:

     xquery version "1.0-ml";
     let $directory := "/myManuscript/sampleManuscript_docx_parts/"
     return cts:uris("","document",cts:directory-query($directory,"infinity"))

The results are a list of all the XML documents we extracted from sampleManuscript.docx.  We can zip them back up and save locally by evaluating the following:

     xquery version "1.0-ml";
     let $directory := "/myManuscript/sampleManuscript_docx_parts/"
     let $uris := cts:uris("","document",cts:directory-query($directory,"infinity"))

     let $parts := (for $i in $uris let $x := fn:doc($i) return $x)

     let $manifest := <parts xmlns="xdmp:zip">
                      {
                       for $i in $uris
                       let $file := fn:substring-after($i,$directory)
                       let $part :=  <part>{$file}</part>
                       return $part
                      }
                      </parts>

     let $pkg := xdmp:zip-create($manifest, ($parts))
     return xdmp:save("C:\test.docx",$pkg)

Double-click on the saved file to open it in Word.  I’ve named it test.docx to demonstrate that we can name it anything we want. More likely we would’ve used the document’s original name, sampleManuscript.docx.

Most of the time though, we’ll want users to be able to dynamically generate their documents on the fly.  Place the following code in a module named opendocx.xqy.  Place it under the /Docs directory for the Server, which is found in the directory where MarkLogic is installed. On Windows the default is C:\Program Files\MarkLogic\Docs.

     xquery version "1.0-ml";
     let $directory := "/myManuscript/sampleManuscript_docx_parts/"
     let $uris := cts:uris("","document",cts:directory-query($directory,"infinity"))

     let $parts := (for $i in $uris let $x := fn:doc($i) return $x)

     let $manifest := <parts xmlns="xdmp:zip">
                      {
                       for $i in $uris
                       let $file := fn:substring-after($i,$directory)
                       let $part :=  <part>{$file}</part>
                       return $part
                      }
                      </parts>

     let $filename := "test.docx"
     let $pkg := xdmp:zip-create($manifest, ($parts))

     let $disposition := concat("attachment; filename=""",$filename,"""")
     let $x := xdmp:add-response-header("Content-Disposition", $disposition)
     let $x := xdmp:set-response-content-type("application/vnd.openxmlformats-officedocument.wordprocessingml.document")
     return $pkg

In a browser, navigate to the url http://localhost:8000/opendocx.xqy.

Assuming Office 2007 is installed on your machine, the document opens right up into its respective application.


We can just open the .docx as well. Save the following under /Docs as opendocx2.xqy.  Update the url in your browser, and the file will again open from the Server into Word.

     xquery version "1.0-ml";
     let $docname := "/myManuscript/sampleManuscript.docx"
     let $pkg := fn:doc($docname)
     let $filename := "test.docx"
     let $disposition := concat("attachment; filename=""",$filename,"""")
     let $x := xdmp:add-response-header("Content-Disposition", $disposition)
     let $x := xdmp:set-response-content-type("application/vnd.openxmlformats-officedocument.wordprocessingml.document")
     return $pkg

And there you have it!  To the Server and back again!  Kind of like the Hobbit, but you didn’t need Gandalf or a bunch of dwarves to help you make the journey.

That should be more than enough to get you started.  To continue with something super awesome, just ponder this: when you have a solid understanding of the document format you’re working with, you can generate Word, Excel, PowerPoint, OpenOffice, InDesign CS4, and any other document type on the Server, and you don’t even need the original application to start with!  We can generate these documents dynamically, on-the-fly, and serve ’em up to our users who use these applications, but to us on the Server, it’s just a set of XML and related parts.
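
As a parting sketch of that idea, here’s a hedged example that builds a tiny .docx entirely in XQuery, with no source document at all.  The three parts and their namespaces come from the Open XML spec; a production document would normally carry more parts (styles, settings, and so on), and the output path is just an example.

     xquery version "1.0-ml";

     (: A minimal WordprocessingML package: content types, package relationships,
        and a main document part with a single paragraph. :)
     let $content-types :=
       <Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
         <Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
         <Default Extension="xml" ContentType="application/xml"/>
         <Override PartName="/word/document.xml"
                   ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml"/>
       </Types>

     let $rels :=
       <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
         <Relationship Id="rId1"
                       Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument"
                       Target="word/document.xml"/>
       </Relationships>

     let $document :=
       <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
         <w:body>
           <w:p><w:r><w:t>Hello from MarkLogic!</w:t></w:r></w:p>
         </w:body>
       </w:document>

     let $manifest :=
       <parts xmlns="xdmp:zip">
         <part>[Content_Types].xml</part>
         <part>_rels/.rels</part>
         <part>word/document.xml</part>
       </parts>

     let $pkg := xdmp:zip-create($manifest, ($content-types, $rels, $document))
     return xdmp:save("C:\generated.docx", $pkg)

Cheers!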