Author archive: Erik Kronberg

Rare bug in SharePoint with IE9 when account names have commas

Configured a certain way, Active Directory will generate account names like so: "Lastname, Firstname". Although SharePoint strongly recommends against using account names with special characters, it seems mostly able to handle this.

Now, user links generated with SharePoint’s renderUserField look something like this:
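A hypothetical reconstruction of such a link (the page name and query string are illustrative, not the exact SharePoint markup, which varies by version):

```javascript
// Hypothetical sketch of the markup renderUserField produces. The page
// name and query string are illustrative; the onclick handler is the
// GoToLinkOrDialogNewWindow function discussed below.
var userLink =
  '<a href="/_layouts/15/userdisp.aspx?ID=turing%2C%20alan" ' +
  'onclick="GoToLinkOrDialogNewWindow(this); return false;">' +
  'Turing, Alan</a>';
```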

Which works perfectly fine. However, with commas in the account name, the account name part of the link will instead look like this: "turing%2C%20alan". The URL-encoded comma is %2C and the space is %20. So far, so good. Try it in Firefox, Chrome, IE10+: no problems. Try it in IE8: works great. But not IE9. Clicking the link in IE9 sends you to a URL containing "turing%252C%20alan". Spot the error? %252C looks a lot like double encoding. And it is. But why is this only a problem in IE9? I worked my way through many suspects, including encodeURI and escapeProperly/unescapeProperly (yes, these do exist in SharePoint). Nothing came up. And the odd thing is that the URL in the link I click doesn't match the URL I end up at. That's because of the onClick handler, of course!

After a lot more work, I found out why this bug only manifests in IE9. There's a piece of code, triggered by the click handler, that decodes the URL-encoded link, does some stuff with it, and encodes it again. This code is guarded by two conditions: it is only reached if MDS (the Minimal Download Strategy) is on and the browser is Internet Explorer 9 or earlier. MDS doesn't work in IE8. That leaves only IE9.

But why the double encoding? The thing is, encodeURI will not touch commas, they don’t need encoding. Try it out in your console:
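For instance, you can verify the difference in any browser console:

```javascript
encodeURI("turing, alan");          // → "turing,%20alan"  — comma left alone
encodeURIComponent("turing, alan"); // → "turing%2C%20alan" — comma encoded
```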

And of course, if you encode something and then decode it the same way, you should get the original string back, right? But what if you use encodeURIComponent to encode, and decodeURI to decode? Bad things happen, because encodeURIComponent _does_ encode commas. So let's say you know a certain string is already encoded, and you want to decode it, do some stuff with it and encode it again. You might write this:
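A minimal sketch of that mistake (the function name and the elided middle step are illustrative):

```javascript
function reencode(url) {
  // decodeURI does NOT decode %2C: the comma is a reserved character
  // that encodeURI would never have produced, so it is left alone.
  var decoded = decodeURI(url);
  // ... do some stuff with the decoded URL here ...
  // encodeURI then re-encodes the "%" of "%2C", giving "%252C".
  return encodeURI(decoded);
}
```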

Try that code with this URL: "http://example.com/turing%2C%20alan/". You should get "http://example.com/turing%252C%20alan/". Mystery solved!

But that just raises another question: why was the comma URL-encoded in the first place? The culprit can be found in SharePoint's "init.js".

Back to the onClick handler. It calls a function named GoToLinkOrDialogNewWindow, which takes the href of the a tag you clicked and, amongst other things, creates a URI object from it. URI is a type defined in "init.js" with lots of nifty functionality. It automatically encodes its content too, and when you call getString on the object, you get a nicely formatted URL string. Except, of course, when there's a comma in it. You see, URI doesn't use encodeURI. Instead, it breaks the link up into parts, a bit like this: http-:-//-example.com-/turing, alan/ (using – as a delimiter). It then fastidiously encodes each component using encodeURIComponent.
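A rough sketch of that behavior (this is not the actual init.js code, just an illustration of splitting a URL and encoding each piece separately):

```javascript
function encodeLikeSharePointUri(url) {
  // Split into scheme, authority and path, then encode each path
  // segment with encodeURIComponent — which also encodes commas.
  var m = /^(https?):\/\/([^\/]*)(\/.*)?$/.exec(url);
  var segments = (m[3] || "/").split("/").map(encodeURIComponent);
  return m[1] + "://" + m[2] + segments.join("/");
}

encodeLikeSharePointUri("http://example.com/turing, alan/");
// → "http://example.com/turing%2C%20alan/"
```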

So who’s wrong? No one, really. The problem is that one part of SharePoint assumes a URL to be encoded using encodeURI. Another part of SharePoint encodes the URL by looping over parts of it and calling encodeURIComponent. Communication breakdown.

Complete walkthrough on Composed Looks in SharePoint 2013

In SharePoint 2013 the creation and application of themes has been revamped, with the intent of making theming easier to use as well as more powerful (now including optional fonts and images). The new concept is the Composed Look: a combination of a master page, a color scheme, optional fonts and an image. Using the Design Manager, even a non-technical end user can create a complete design package, ready to deploy. What's even better, you can mix and match parts, taking the color scheme from one theme and the font from another, all from a web-based wizard.

Although some material has been written already, much of it on Elio Struyf's blog, I still struggled to make the various code snippets I found work together. With this article on SharePoint theming I hope to add to the available documentation and examples, as well as give a light tutorial on creating a composed look to be used with a custom master page that includes custom elements and CSS.

To use the examples I give in this article you need to have the SharePoint Server Publishing Infrastructure (this in turn requires SharePoint Server, not Foundation) Site Collection feature activated, as well as the SharePoint Server Publishing Site feature.

SPColor

SharePoint 2013 color schemes are defined in the .spcolor file type. The easiest way to get started is grabbing an existing palette from the hive, in my case found at

Alternatively you can grab one from /_catalogs/theme/15/ on a running site. Rename it as you wish and take a look inside. It's a pretty straightforward XML file, where each color is defined in hexadecimal. It does not handle RGB notation, and has an uncommon format for defining transparency. To create a black background at 80% opacity (black is #000000) you would use CC000000. The first two characters define the alpha value; the rest is a regular hexadecimal color.
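For example, a single color entry might look like this (the "PageBackground" slot name is just an illustration; slot names vary between palette files):

```xml
<!-- "CC" is the alpha value (roughly 80% opacity), "000000" the color -->
<s:color name="PageBackground" value="CC000000" />
```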

Once you have your .spcolor file edited to your liking, you'll want to check out its effects on a live page. The fastest way is to upload the .spcolor file to the /_catalogs/theme/15 library. After this, it should be available as a color scheme in the Design Manager.

SPFont

Just like with the SPColor file, we can define our own fonts using the .spfont file type. It can be found in the same place as the .spcolor files on both hive and live sites. The XML is slightly more involved than the color scheme one, but far from complicated.

Looking at the existing font scheme files, you'll see that each contains a list of fontSlots, where each fontSlot is given a name. The fontSlots in turn contain s:latin, s:ea and s:cs elements. These correspond to Western, East Asian and complex-script languages. All three are required.

If your typeface is standard, you can simply stick it in there, but if you have your own font it requires a few extra steps. First of all, the s:latin element requires a few more attributes, namely svgsrc, eotsrc, woffsrc, ttfsrc, largeimgsrc and smallimgsrc. You have to set all of them if you set one. Just give them empty strings if you don’t want them. Each src attribute corresponds to the URL where the font will be hosted. The standard location will be /_layouts/15/fonts/ and maps to:
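A hypothetical s:latin entry for a custom font might then look like this (the font name and file names are illustrative, not a prescribed convention):

```xml
<s:latin typeface="MyCustomFont"
         eotsrc="/_layouts/15/fonts/mycustomfont.eot"
         woffsrc="/_layouts/15/fonts/mycustomfont.woff"
         ttfsrc="/_layouts/15/fonts/mycustomfont.ttf"
         svgsrc="/_layouts/15/fonts/mycustomfont.svg"
         largeimgsrc=""
         smallimgsrc="" />
```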

Note that the .eot file is required to display the font in Internet Explorer 8 and earlier. A second issue is that Chrome will not handle anti-aliasing correctly for woff/ttf files; it only renders svg fonts well. Sadly, it will use the first source given to it, and SharePoint won't let you choose the order. If you have a working fix for this, please email me!

Custom Master Page

Just like the .spcolor file, creating the custom master page simply involves copying an existing one. I went with the Seattle file, but Oslo should work just as well. You can grab it from the hive at

or on a live site at /_catalogs/masterpage/Forms/AllItems.aspx. Once copied and renamed, you can upload it to the library at that URL. In order for this master page to be available as a Composed Look in the Design Manager, we also need a preview page. I will go into this a bit more later, but for now, make a copy of the corresponding preview file, rename it as you did the master page, and upload it appropriately.

If you want to make sure you are seeing your custom master page in the next step, add an element or just some text somewhere.

One simple way to set the master page is to open up the Design Manager (remember that you need to have activated the SharePoint Server Publishing Infrastructure Site Collection feature, as well as the Publishing Site feature, for this to be available), choose any of the pre-installed Composed Looks, then select your custom master page in the Site Layout select menu. Apply this Composed Look and check it out; you should see your custom elements/text. You can also pick your color scheme from the color scheme select menu, but since the schemes don't have names, it may be hard to make out exactly which one is yours! Use the preview window to the right of the menu to determine which theme is correct.

Composed Look

To avoid having to recreate this combination by selecting the appropriate color scheme every time you change the theme, you can save it as a composed look. To do this, go to Site Settings; under the Web Designer Galleries header you should find the Composed Looks link. Clicking it takes you to a list containing all the saved themes. Click new item and fill in the information to store your specific combination. Give it a useful name, as this will show up in the gallery.

Custom CSS

In order to accommodate all these different color themes, SharePoint actually "compiles" every CSS file it has access to, replacing colors wherever the appropriate comments exist. For this to work, your CSS file must be registered on the custom master page using a directive like this:
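A sketch of such a registration (the path and file name are illustrative; the key attribute is EnableCssTheming, which opts the file into the theming engine):

```aspx
<%-- EnableCssTheming tells SharePoint to run this file through the theming engine --%>
<SharePoint:CssRegistration Name="/Style Library/custom.css"
    EnableCssTheming="true" runat="server" />
```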

As an example, in order to replace a color in a CSS file with the color defined by the theme’s color scheme, you use the special “ReplaceColor” directive to the SharePoint theming engine. It looks something like this:
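A sketch, with a hypothetical slot name (use a slot that is actually defined in your palette):

```css
/* [ReplaceColor(themeColor:"PageBackground")] */ background-color: #000000;
```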

If properly run through the SharePoint theming engine, the background color will not be black, but rather whatever color is set in the color scheme (.spcolor file). In the same way, “ReplaceFont” handles fonts defined in the .spfont file:
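For example (the "body" font slot is an assumption here; use a slot name from your own .spfont file):

```css
/* [ReplaceFont(themeFont:"body")] */ font-family: "Segoe UI", sans-serif;
```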

Master Page Preview File

Now, a small gotcha. If you look at your master page preview file, you'll see it has a very particular format. The one thing that stands out is on the first line: the default color scheme. What caused me three days of headaches was my assumption that this should name the intended default .spcolor file for the master page. This, however, causes something slightly surprising: the CSS files included in the master page will not be compiled when this .spcolor file is chosen in the composed look. Any ReplaceColor directives will be ignored. This applies only to the .spcolor defined in the preview file; any other color scheme chosen in the Design Manager will trigger the appropriate process. This of course works for the built-in master pages, since their default colors are already hard-coded in the CSS files, so no replacements are required.

For example, the Seattle master page preview file defines Palette001.spcolor on its first line. If this combination is chosen, it will still render correctly, because all CSS files associated with this master page have these colors as default. The ReplaceColor directives are ignored.

If you, like me, have added custom elements and custom CSS to the master page, however, your own files will also not be compiled. Even worse, if you've set your custom .spcolor in the preview file, you will be tearing your hair out trying to figure out why every theme works but yours.

A simple solution is to define no .spcolor file at the top of the master page preview. SharePoint handles this perfectly fine, and all color schemes will then trigger the replacement process in the theming engine.

Putting it all together

Finally, let’s build an example project. We’ll create a color scheme, a custom master page with preview, a custom CSS file with ReplaceColor and tie it all together in a composed look.

Using the files we’ve found so far, we can put together a project that looks something like this:

Composed Looks Example Project

The color scheme and master page are provisioned through the feature:

Declarative Feature

We create our composed look and apply it using an EventReceiver.

Sources:

http://tommdaly.wordpress.com/2012/12/19/deploying-a-custom-composed-look-in-sharepoint-2013/
http://kilianvalkhof.com/2010/css-xhtml/how-to-use-rgba-in-ie/
http://www.estruyf.be/blog/how-to-create-a-master-page-that-is-available-for-the-composed-looks/
http://www.estruyf.be/blog/creating-a-new-color-palette-for-a-sharepoint-2013-composed-look/
http://spitems.wordpress.com/2011/01/17/using-theme-settings-in-style-sheets/

Taking Promises Apart

Fueled by my recent interest in both jQuery's and Q.js's implementations of the Promises concept, I set out to make one of my own, albeit a lot simpler. In fact, I limited myself to the basic idea of the original Promises/A specification. In summary, it only states that a promise is something with a method then, which takes two callbacks, one to be evaluated on success and the other on failure, and returns a promise. Since each call to then returns a promise, this allows chaining of functions and further promises. It also implies that something will eventually resolve or reject the promise and evaluate the corresponding callbacks. In other words, something that is a promise acts like this:
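In code, the contract reads something like this (promiseReturningFunction is hypothetical, and a native Promise stands in for any implementation):

```javascript
// then(onSuccess, onFailure) returns a new promise, so calls chain.
function promiseReturningFunction() {
  return Promise.resolve(42); // stand-in for some async computation
}

promiseReturningFunction()
  .then(function (value) { return value + 1; })     // plain return value is passed on
  .then(function (value) { console.log(value); },   // success handler — logs 43
        function (error) { console.error(error); }); // failure handler
```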

But that's just half of it, because the promise can also fail, and failures work differently. In order to preserve standard error behavior we have to automatically reject all subsequent chained promises with the original error message. If an exception happens in promiseReturningFunction we want it to travel all the way down the chain, without evaluating any of the other success callbacks in the chain. Chaining functions implies that each relies on the success of the previous one, or we would not need to chain them in the first place.
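That propagation looks like this (again using a native Promise purely as a stand-in):

```javascript
// Rejections skip success handlers and travel down the chain,
// carrying the original error with them.
Promise.reject(new Error("boom"))
  .then(function (v) { console.log("never runs"); })
  .then(function (v) { console.log("never runs either"); },
        function (err) { console.error(err.message); }); // logs "boom"
```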

When you think about it, that’s all you need to implement the basic promises functionality. Nowhere close to the usability of Q.js or others, but still. Promises.

Note that Promises/A has been superseded by Promises/A+, which clarifies and extends the vague and confusing original specification.

Rolling our own

This is not a serious attempt at building a real competitor to the incumbents, this is just for educational purposes. If you just want to take a look at the code, here’s a gist.

Ok, let’s flesh this out!

Unintuitively, although the concept is called Promises, the main object is commonly referred to as the deferred object. In jQuery, you create a deferred object using $.Deferred(), in Q.js using Q.defer() and in AngularJS $q.defer(). Microsoft tried their hand at promises in the WinJS implementation distributed with Metro (or Modern UI) apps, but got absolutely confused and ended up with new WinJS.Promise(init, oncancel).

We forgo namespacing and simply create a function called defer.

Thinking back to our definition of a promise, we need to be able to reject or resolve it. Essentially, when our deferred computation has completed or failed, we evaluate the appropriate handlers with the result of the deferred computation. We also need to be able to pass a promise, allowing calling code to assign handlers for success and failure. Think of the deferred object/promise as a reference to a computation that will, at some point, finish. And in order to do something once it finishes, whether success or failure, we need to assign it work to do on completion. This means we need then.

A deferred object can be resolved or rejected, and once it has reached one of those states, it can also hold a result value. It will also hold handlers acquired through then. So we need variables to represent this. For brevity, I will start omitting code we’ve already been through. To see the full implementation, take a look at the gist.

We are now missing reject, resolve and then. I found the last to be the most intuitive way forward. To write it, however, we have to consider the first code snippet in this post. The parameters of then can take many forms. In our implementation we will handle three cases.

  1. A function that returns a promise
  2. A function that returns a value
  3. Undefined
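In code, the three shapes look like this (a native Promise serves as a stand-in for our implementation):

```javascript
var p = Promise.resolve(1);
p.then(function (v) { return Promise.resolve(v + 1); }); // 1. returns a promise
p.then(function (v) { return v + 1; });                  // 2. returns a plain value
p.then(undefined, function (e) { console.error(e); });   // 3. success handler undefined
```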

And each of those cases applies to both error and success handlers. What I will show you is a naive attempt at covering them all. Also remember that even though the callbacks handed to then may return non-promises, then always has to return a promise in order to allow chaining.

I take special delight in the essentially recursive call to defer. As you can see, we actually use our own deferred object to bootstrap itself, in a fashion. Let me walk you through it. First we create a new deferred object to keep track of the state of the callback which is run by then. Secondly, we add done and fail to our handlers object, but “wrapped” with the deferred object. I will get into this shortly. The original deferred object might already be in a resolved or rejected state, so we call a function fulfill to optionally run the handlers. This will also be explained later.

Finally, we return a promise to the inner computation.

Wrap is also an interesting, JavaScript-esque piece of code. In order to resolve or reject our promise at some point in the future using the handler, we pass the deferred object along with it when we store it in the handlers object, through a technique called a closure.

So wrap returns a new function which closes on the callback and deferred object. This gives us access to the correct promise object when we evaluate the handlers.

First up, we make sure the handler exists. If it does not, we replace it with noop, a function which does nothing. Next, we make sure that, if our state is 'rejected', we reject the promise which was returned earlier by then with our value, instead of the value returned by the handler, in order to preserve the original error message. Of course, we still evaluate the error handler.

With that taken care of, we evaluate the handler inside a try/catch, letting us grab any exception and turn it into a rejection of the promise.

Now that we have the result of the success handler in the next variable, we really only have two cases left to handle: a function returning a value (or undefined) and a function returning a promise. The first case is so simple I used it as the catch-all end case. But in the case of a promise, as advertised by the object having a then method, we have to hook into it, once more using handlers. This is really the main reason why promises are so great; this is what makes them shine. The callbacks upon callbacks are hidden through this recursive evaluation. Each promise is hooked into the preceding one and fully dependent on its final state. If the inner promise is resolved, we resolve. If not, we reject. And we do it once the inner computation is done.

We are just missing resolve, reject and fulfill. Their implementations are simple enough. The first two serve to update the state of the object. The last is just a convenience to evaluate the handlers if a resolution is reached.
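Putting all the pieces together, a complete minimal sketch might look like this. The structure and names (wrap, fulfill, handlers) mirror the walkthrough above; this is an illustration, not the original gist verbatim:

```javascript
function defer() {
  var status = 'pending'; // 'pending' | 'resolved' | 'rejected'
  var value;              // the result once the promise settles
  var handlers = { done: [], fail: [] };

  function noop() {}

  // Close over the callback and the chained deferred object, so that
  // evaluating the handler can settle the promise `then` returned.
  function wrap(callback, deferred) {
    return function (val) {
      if (status === 'rejected') {
        // Still evaluate the error handler, but reject the chained
        // promise with the original value to preserve the error.
        (callback || noop)(val);
        deferred.reject(val);
        return;
      }
      var next;
      try {
        next = (callback || noop)(val);
      } catch (e) {
        deferred.reject(e); // exceptions become rejections
        return;
      }
      if (next && typeof next.then === 'function') {
        // Case 1: the handler returned a promise — hook into it.
        next.then(deferred.resolve, deferred.reject);
      } else {
        // Cases 2 and 3: a plain value, or undefined.
        deferred.resolve(next);
      }
    };
  }

  // Evaluate the stored handlers, if a resolution has been reached.
  function fulfill() {
    if (status === 'pending') return;
    var list = status === 'resolved' ? handlers.done : handlers.fail;
    while (list.length) list.shift()(value);
    handlers = { done: [], fail: [] };
  }

  var deferred = {
    resolve: function (val) {
      if (status !== 'pending') return;
      status = 'resolved';
      value = val;
      fulfill();
    },
    reject: function (val) {
      if (status !== 'pending') return;
      status = 'rejected';
      value = val;
      fulfill();
    },
    promise: {
      then: function (done, fail) {
        var inner = defer(); // the essentially recursive call to defer
        handlers.done.push(wrap(done, inner));
        handlers.fail.push(wrap(fail, inner));
        fulfill(); // in case we have already settled
        return inner.promise;
      }
    }
  };
  return deferred;
}
```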

What strikes me is how elegant the idea is, even though the code shown here is not. The repeated checks for undefined and null are symptomatic of a bad abstraction (or lack of one!). But the idea was never to write a “real” implementation, rather just to explore the concept. Now that you’ve seen how it could be implemented, maybe you have a better understanding of what it is you are doing when you pass a promise.

I also can't help but reflect on how much easier it was to understand promises having spent some time with Haskell. Promises solve the exact problem in JavaScript that monads do in Haskell: they let you set up chains of computations that each depend on the previous one. It should be no surprise, then, that what we have made here really is a monad. In even more abstract terms, it's a value with a context, passed through transformations. In a language like Haskell, though, implementing this kind of abstraction is made much simpler by the very design of the language; it was in some ways built around monads (e.g. do notation, IO).

If you’ve gotten this far, I have to congratulate you. It is no small feat keeping your concentration through all that. If you are still hungry for more on Promises, do look at the implementation of Q.js, it is surprisingly readable.

Thank you for reading! I hope it helped you get a bit closer to truly grokking promises, as it did me. Please email me with comments, ideas or questions.

Flattening Callbacks Using Promises

Lately I've been working on some async-heavy client side code and saw it as a great opportunity to get acquainted with Promises. The project I work on already includes jQuery, which ships with an implementation, so it became the obvious choice.

The simplest and initially most convincing use case is that of flattening deeply nested callback “stairs”.
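The canonical example, with hypothetical functions (synchronous stubs stand in for real async calls):

```javascript
// Hypothetical stubs; imagine each doing real async work
// before invoking its callback.
function getUsername(cb) { cb("alan"); }
function getSubscribedCategories(user, cb) { cb(["computing"]); }
function getSuggestedArticles(cats, cb) { cb(["On Computable Numbers"]); }

getUsername(function (username) {
  getSubscribedCategories(username, function (categories) {
    getSuggestedArticles(categories, function (articles) {
      console.log(articles); // the dreaded rightward drift
    });
  });
});
```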

Before we get into it: is this even a problem? I'd argue yes. First of all, error management is just about impossible, since none of the asynchronous methods ever throw an exception; only the callbacks do, and that's only after the original context is out of scope. Catching these exceptions is nigh on impossible without leaking much of the abstraction. Callbacks break the natural exception bubbling concept. As I write this article, a post on the dangers of callbacks is featured on Hacker News; I suggest reading it for more information.

To solve this problem, much of the async code in NodeJS libraries instead uses a leading error parameter on callbacks.
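With hypothetical stubs again, the error-first convention looks like this:

```javascript
// Hypothetical stubs following the Node error-first convention.
function getUsername(cb) { cb(null, "alan"); }
function getSubscribedCategories(u, cb) { cb(null, ["computing"]); }
function getSuggestedArticles(c, cb) { cb(null, ["On Computable Numbers"]); }
function printArticles(a) { console.log(a); }
function handleError(e) { console.error(e); }

getUsername(function (error, username) {
  if (error) { return handleError(error); }     // manual check #1
  getSubscribedCategories(username, function (error, categories) {
    if (error) { return handleError(error); }   // manual check #2
    getSuggestedArticles(categories, function (error, articles) {
      if (error) { return handleError(error); } // manual check #3
      printArticles(articles);
    });
  });
});
```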

This only complicates the code, adding manual error management in three distinct places. Let’s see what it would look like if the functions, instead of taking callbacks, would return promises.
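A sketch of the promise-based version (the functions are hypothetical stubs, with a native Promise standing in for jQuery's implementation):

```javascript
// Hypothetical promise-returning versions of the same functions.
function getUsername() { return Promise.resolve("alan"); }
function getSubscribedCategories(u) { return Promise.resolve(["computing"]); }
function getSuggestedArticles(c) { return Promise.resolve(["On Computable Numbers"]); }
function printArticles(a) { console.log(a); }
function handleError(e) { console.error(e); }

getUsername()
  .then(function (username) {
    return getSubscribedCategories(username);
  }, handleError)
  .then(function (categories) {
    return getSuggestedArticles(categories);
  }, handleError)
  .then(function (articles) {
    printArticles(articles);
  }, handleError);
```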

Take a moment to look at that. If you’re new to deferred objects and promises, you’re probably not convinced. It takes some getting used to so let’s walk through it. Right off the bat you can see there is no longer an error parameter, and no need to check for it. Instead we hand handleError as a secondary callback to then. We have succesfully removed a significant source of distraction in the code. In the traditional NodeJS style code above we had to mix error handling into our function specific code. Even worse, we had to write this over and over again! if (error) does not belong there. Especially since the function that takes the callback has already done error management and passed us the potential failure, forcing us to write error checking again, violating “Don’t Repeat Yourself”.

But there are still improvements to be made! The sharp-eyed will notice that several of our anonymous functions add no actual value. We can rewrite it to be even simpler.
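Dropping the anonymous wrappers and keeping a single error handler at the end, a sketch (hypothetical promise-returning stubs, native Promise as a stand-in):

```javascript
function getUsername() { return Promise.resolve("alan"); }
function getSubscribedCategories(u) { return Promise.resolve(["computing"]); }
function getSuggestedArticles(c) { return Promise.resolve(["On Computable Numbers"]); }
function printArticles(a) { console.log(a); }
function handleError(e) { console.error(e); }

getUsername()
  .then(getSubscribedCategories)
  .then(getSuggestedArticles)
  .then(printArticles, handleError); // one error handler for the whole chain
```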

Apart from dropping the anonymous functions, we now only pass the error handler in the last call to then. This is not a typo. One of the greatest things about promises is that they recover the effect of exception bubbling (or something quite like it). In the case of jQuery deferreds, at least, a rejected promise will bubble and reject subsequent promises. This essentially means that if the first promise is rejected, all following promises are rejected too. Not only that, they are rejected with the original reason. Errors are thus preserved all the way to the end of the chain.

Conclusion

As you can see, promises are a reasonably simple thing to use. We went from an unsightly mess of callbacks with no error management to a reasonably flat and surprisingly readable error-managed alternative. Reading it out loud actually explains it quite well: get the username, then get the subscribed categories, then get the suggested articles and then print them. If something goes wrong, handle it. We've written something that mimics synchronous code, which makes it much easier to reason about its behavior.

jQuery's implementation of promises is not an ideal realization of the Promises/A+ specification, but it is usable nonetheless. If you are interested in alternatives, take a look at Kris Kowal's Q.js, which does handle exceptions.

As always, if you have any thoughts, ideas or comments, please don't hesitate to email me.