
Adding Existing VHDs to Azure Resource Groups

Azure Resource Groups. Simultaneously the most exciting and most frustrating part of Azure vNext. While powerful, today they're quite inflexible – no API is exposed for editing info (like names), moving resources from one group to another, or really any management at all. And the sprawl – the sprawl! It's awful. So many auto-generated resource groups. With a finite number allowed per account, using them appropriately is a priority.

I elected to move some of my older deployments into a new virtual network, to gain the internal load balancer and to bring things on par with what's current. For VMs, this is generally easy – delete the VM, keeping the disks, then recreate the VM in the new network. No big deal. Using this same approach, I could also deploy these machines into the same resource group, alongside similar resources.

Until I got into the new portal – I can’t, for the life of me, find any way within the new portal to create a new VM from an existing VHD. I dug through the PowerShell cmdlets for a bit, still couldn’t find much – particularly for adding that new VM into an existing resource group.

Side note: we still can’t upload our own Resource Group Templates? Really?


For now, create a throwaway virtual machine in your resource group, with the proper cloud service name (DNS name, in the new portal), networking, storage account, etc. In the old portal (or through PowerShell), create your new VM from the existing VHD as you would normally, making sure to pick the cloud service which was created in your resource group. You can delete the throwaway VM now too. Check back in the portal a bit later and your existing VHD should now be in a new VM, in a resource group of your choosing.

Protecting WCF with Azure AD

Mobile services. MVC Web APIs. They're ubiquitous now. In some cases, though, WCF is still the platform of choice for service developers. Sometimes it's interoperability with other services, sometimes it's just not wanting to rewrite old code – or perhaps a large part of your architecture requires service hosts + factories – whatever the reason, it's not feasible to rewrite or rearchitect large swaths of systems just to add authentication.

Typical, 3-tier Apps

Let’s look at a typical three-tier app – UI, service + data:

Firewalls keep everyone out except our web app.


Here, we’ve got a web app which talks to an unauthenticated service, which talks to some data. Pretty simple stuff. The box indicates the internet permeability – if the web server is the only thing exposed to the internet, this is a generally OK approach. If nothing has access to the service except the target consumer, what could go wrong? How hard could it be?

In this model, your web app handles authenticating clients and proxies requests back to the service. A pretty standard model.


But let's extrapolate further. It's 2015 – how many services only have a single web client anymore? Everything is connected and everything is slurping data from everything else. Not only are we going to have trusted hosts, we're going to have mobile apps, perhaps we expose an API – there are lots of things to consider. Here's how I'd expect our app to look through a modern lens:

Our service now has to handle multiple clients - and they're not all coming from a trusted host.


Our app has to handle some number of potentially unknown entry points.

So what are we to do? We can leverage OAuth server-to-server to secure our services. This way, we’re not publishing a static key into our mobile applications – as anyone who’s seen how trivial it is to decompile an Android app knows, you can never trust the client. There are two options – we’re going to dig into the first (application-only, 2-legged OAuth) – and we’ll follow up with 3-legged in a later post.

Server-to-server OAuth (e.g., 2-legged, Application-Only)

Our first option is:

  • significantly better than no authentication
  • somewhat better than static keys/shared credentials
  • useful for locking down an API, but not necessarily at a user level

This is application-only access, also known as two-legged OAuth. In this model, the server doesn’t need to know a specific user principal, but an app principal. A principal token is required by the service and is requested by the client:

  • STS-known client requests an OAuth token from STS (e.g., Azure AD)
  • STS-known client sends token in header (Authorization: Bearer eQy…)
  • Service expects header, retrieves token
  • Service validates token with Azure AD
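Under the hood, that first step is just an OAuth2 client-credentials request against the tenant's token endpoint. Here's a minimal sketch using raw HTTP – the tenant name, client ID, secret, and resource URI are all placeholders:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class AppTokenClient
{
    public static async Task<string> GetTokenResponseAsync()
    {
        using (var http = new HttpClient())
        {
            // Standard OAuth2 client-credentials grant; all values are placeholders.
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "client_credentials" },
                { "client_id", "<client-id-guid>" },
                { "client_secret", "<client-secret>" },
                { "resource", "https://yourtenant.onmicrosoft.com/YourWcfService" }
            });

            var response = await http.PostAsync(
                "https://login.windows.net/yourtenant.onmicrosoft.com/oauth2/token", body);
            response.EnsureSuccessStatusCode();

            // The JSON payload contains access_token, token_type ("Bearer"), expires_on, etc.
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

ADAL wraps this whole exchange for you (we'll use it client-side below), but seeing the raw request makes it clear there's no magic here.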

User OAuth (aka 3-legged)

This option is somewhat different – instead of using an application principal to connect to our service, we're going to be connecting on behalf of the user. This is useful for:

  • applications that rely on a service to security-trim data returned
  • services that are public or expected to have many untrusted clients

In this model, the user authenticates and authorizes the application to act on their behalf, and an access token is issued upon successful authentication. The application then uses that token when requesting resources. If you've ever used an app for Facebook or Twitter, you've been through a 3-legged OAuth model.

WCF Service Behaviors + Filters

There are two pieces we need to build – a server-side Service Behavior that inspects + validates the incoming token, and a client-side filter that acquires a token and stuffs it in the Authorization header before the WCF message is sent. We’ve used this pattern on a few projects now – this is a good resource for more details and similar implementations.

We need to do three things:

  • Update the WCF service with a message inspector that will inspect the current message
  • Update the WCF client to request a token and include it in the outgoing message
  • Update the WCF service’s Azure AD application manifest to expose that permission to other Azure AD applications

Service Side

Service side, we want something which can inspect the messages as they come in; this inspector will both grab the token off the request + validate it. This started life from the above blog post, but was modified for the newer identity objects in .NET 4.5 (which now includes WIF) and for clarity.

Some Code

Looking through here, we’ll find pretty much everything we need to make our WCF service ready to receive and validate tokens. The highlights:


Here we're doing the main chunk of work. AfterReceiveRequest is fired after the WCF subsystem receives the request, but before it's passed on to the service implementation – sounds like the perfect place for us to do some work. We start by inspecting the request for the Authorization header, finding the federation metadata for that tenant, and validating the token. System.IdentityModel.Tokens.JwtSecurityTokenHandler does the heavy lifting here (it's a NuGet package), handling the roundtrips to AAD to validate the token based on the configuration. Take note of the TokenValidationParameters object; any misconfiguration here will cause validation to fail.
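For reference, a minimal sketch of such an inspector – the class name, audience, and issuer values are placeholders, fetching the signing tokens from federation metadata is omitted, and the exact ValidateToken signature and parameter names vary slightly between handler versions:

```csharp
using System.IdentityModel.Tokens;          // JwtSecurityTokenHandler NuGet package
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Threading;

public class BearerTokenMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel,
        InstanceContext instanceContext)
    {
        // Pull the Authorization header off the underlying HTTP request.
        var http = (HttpRequestMessageProperty)request.Properties[HttpRequestMessageProperty.Name];
        var header = http.Headers["Authorization"];
        if (header == null || !header.StartsWith("Bearer "))
            throw new FaultException("Missing or malformed Authorization header.");

        var token = header.Substring("Bearer ".Length);

        // Any misconfiguration here (audience, issuer, signing tokens) fails validation.
        // Property names shown are from handler v4; earlier versions differ slightly.
        var parameters = new TokenValidationParameters
        {
            ValidAudience = "https://yourtenant.onmicrosoft.com/YourWcfService", // placeholder
            ValidIssuer = "https://sts.windows.net/<tenant-id>/",                // placeholder
            // IssuerSigningTokens = ...loaded from the tenant's federation metadata
        };

        SecurityToken validated;
        var principal = new JwtSecurityTokenHandler().ValidateToken(token, parameters, out validated);

        // Make the caller's claims available to the service implementation.
        Thread.CurrentPrincipal = principal;
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```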


Next we'll need to create a service behavior, instructing WCF to add our new message inspector to the dispatch runtime's MessageInspectors collection.
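A sketch of that behavior, assuming an inspector class named BearerTokenMessageInspector (the name is a placeholder for whatever your dispatch message inspector is called):

```csharp
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class BearerTokenServiceBehavior : IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        // Attach the inspector to every endpoint the host exposes.
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
            foreach (EndpointDispatcher endpoint in channelDispatcher.Endpoints)
                endpoint.DispatchRuntime.MessageInspectors.Add(new BearerTokenMessageInspector());
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase) { }
}
```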


This is a simple class that exposes the service behavior as an extension that can be controlled via config.
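A sketch, assuming a service behavior class named BearerTokenServiceBehavior (a placeholder name):

```csharp
using System;
using System.ServiceModel.Configuration;

// Lets the behavior be switched on from web.config rather than code.
public class BearerTokenBehaviorExtensionElement : BehaviorExtensionElement
{
    public override Type BehaviorType
    {
        get { return typeof(BearerTokenServiceBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new BearerTokenServiceBehavior();
    }
}
```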


This is a helper for returning error data in the event of a failed authentication call. We can return a WWW-Authenticate header here (in the case of a 401), instructing the caller where to retrieve a valid token.
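A sketch of one way to do that with an IErrorHandler – the authorization URI is a placeholder for your tenant's login endpoint:

```csharp
using System;
using System.Net;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class AuthenticationErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        return true; // consider the error handled
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        // Return a 401 with a WWW-Authenticate header pointing the caller at the STS.
        var http = new HttpResponseMessageProperty
        {
            StatusCode = HttpStatusCode.Unauthorized
        };
        http.Headers[HttpResponseHeader.WwwAuthenticate] =
            "Bearer authorization_uri=\"https://login.windows.net/yourtenant.onmicrosoft.com\"";

        fault = Message.CreateMessage(version, "", "Authentication failed.");
        fault.Properties[HttpResponseMessageProperty.Name] = http;
    }
}
```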

Service Configuration

The last piece is updating the WCF service’s config to enable that message inspector:
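Roughly, assuming a behavior extension element named BearerTokenBehaviorExtensionElement living in an assembly called MyService (both placeholders – behaviorExtensions can be picky about wanting the fully assembly-qualified type name):

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- Placeholder type name; use your extension element's full type + assembly -->
      <add name="bearerTokenValidation"
           type="MyService.BearerTokenBehaviorExtensionElement, MyService" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <bearerTokenValidation />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```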

Client Side

Now that our service is set up to both find and validate tokens, we need our clients to acquire and send those tokens in the headers. This is much simpler, thanks to ADAL – getting a token is about a five-line operation.


The AuthorizationHeaderMessageInspector runs on a client and handles two things – acquiring the token and putting it in the proper header.
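A sketch of what that inspector looks like – TokenHelper here is a placeholder for whatever acquires your token:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class AuthorizationHeaderMessageInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Ensure an HTTP property exists, then stuff the bearer token into it.
        HttpRequestMessageProperty http;
        object property;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out property))
        {
            http = (HttpRequestMessageProperty)property;
        }
        else
        {
            http = new HttpRequestMessageProperty();
            request.Properties.Add(HttpRequestMessageProperty.Name, http);
        }

        // TokenHelper is a placeholder for your token-acquisition helper (e.g., ADAL).
        http.Headers["Authorization"] = "Bearer " + TokenHelper.GetAccessToken();
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}
```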


This is a simple helper for acquiring the token using ADAL. You can modify this to pop a browser window and get user tokens, but as written it's completely headless, returning an application-only token. ADAL also handles caching the tokens, so there's no need to fret about calling this on every request.
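A sketch of that helper, using ADAL v2's client-credential flow (the authority, IDs, and resource URI are placeholders):

```csharp
using Microsoft.IdentityModel.Clients.ActiveDirectory; // ADAL v2 NuGet package

public static class TokenHelper
{
    // All values below are placeholders for your tenant and applications.
    private const string Authority = "https://login.windows.net/yourtenant.onmicrosoft.com";
    private const string ClientId = "<client-app-id>";
    private const string ClientSecret = "<client-app-secret>";
    private const string Resource = "https://yourtenant.onmicrosoft.com/YourWcfService";

    public static string GetAccessToken()
    {
        // ADAL caches tokens internally, so calling this per-request is cheap.
        var context = new AuthenticationContext(Authority);
        var result = context.AcquireToken(Resource, new ClientCredential(ClientId, ClientSecret));
        return result.AccessToken;
    }
}
```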


A wrapper to add the AuthorizationHeaderMessageInspector to your outgoing messages.


A simple extension method for adding the endpoint behavior to the service client.
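Sketched together, the endpoint behavior and extension method might look like this (the inspector is the AuthorizationHeaderMessageInspector described above):

```csharp
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class AuthorizationEndpointBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // Every outgoing message on this endpoint now gets the Authorization header.
        clientRuntime.MessageInspectors.Add(new AuthorizationHeaderMessageInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
        EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

public static class ServiceEndpointExtensions
{
    public static void AddAuthorizationEndpointBehavior(this ServiceEndpoint endpoint)
    {
        endpoint.Behaviors.Add(new AuthorizationEndpointBehavior());
    }
}
```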


Wrapping it all together, here's what we've got – a simple call to ServiceClient.Endpoint.AddAuthorizationEndpointBehavior() and our client is configured with a token. Your outgoing call should include the header, which the service will consume and validate, sending you back some data. Easy, right?!

Configuring Azure AD

The last thing we need to do is configure Azure AD with our applications. Those client IDs and secrets aren’t just going to create themselves, eh? I’m hopeful if you’ve made it this far that adding a new application to Azure AD isn’t taxing your mental resources, so I won’t get into how to create the applications. Once they’re created, we need to do two things – expose the permission and grant that to our client. Let’s go.

App Manifest

The app manifest is the master record of your application's configuration. You can access it via the portal, using 'Manage Manifest' in the menu of your app:

Download your manifest and check it out. It’s likely pretty simple. We want to add a chunk to the oauth2Permissions block, then upload it back into the portal:
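The chunk looks roughly like this – the descriptions, display names, and value are up to you, and id needs a freshly generated GUID:

```json
"oauth2Permissions": [
  {
    "adminConsentDescription": "Allow the application access to the WCF service",
    "adminConsentDisplayName": "Access YourWcfService",
    "id": "<generate-a-new-guid>",
    "isEnabled": true,
    "type": "User",
    "userConsentDescription": "Allow the application access to the WCF service on your behalf",
    "userConsentDisplayName": "Access YourWcfService",
    "value": "user_impersonation"
  }
]
```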

What’s this doing, exactly? It’s allowing us to expose a specific permission to Azure AD, so we can grant that permission to other Azure AD applications. Head over to your client application’s Azure AD app record. Near the bottom of the ‘Configure’ section, we’ll see ‘Permissions to other applications’ – let’s find our service in this list. Once you’ve found it, you can grant specific permissions. Extrapolate this further, and you can see there’s certainly room for improvement. Perhaps other permission sets and permissions are available within our app? They can be exposed and granted here.

Ed note: It's finally out of preview!


Wrap

What you’ve seen is a ready-to-go example of using Azure AD to authenticate your applications. We’ll dig into using user tokens at both the application and service levels in a later post, but in the meantime, you’ve now got a way that’s better than shared credentials or *gasp* no authentication on your services.

Consolidating Services for Maximum Efficiency

Every day we're bombarded with vendors, providers and *ahem* consultants telling us we need to break up our apps for maximum scalability & availability in the cloud. This is true – one of the keys to maximizing efficiency is breaking your applications down into units of work that can be scaled independently. This comes at a cost, however – imagine your Azure cloud project is made up of a dozen web services spread out over a dozen web roles. That gets pretty expensive pretty quickly, especially if you're targeting the SLA – 24 instances for a dozen services.

Let’s say you’re migrating a few LOB apps to the cloud – does each of these need its own scalability unit? Perhaps they work in concert together, or perhaps no single application taxes the underlying servers more than a few percentage points at a time. Is this really the most efficient use of resources?

Breaking your application into smaller units on expected scalability boundaries is a best practice, without a doubt – but does that require that each service live within its own instance all the time? Let’s revisit our guidance and turn it into something more palatable and more explicit. We’ll look at two examples, reusing queue workers and stacking multiple IIS applications on a web role.

We’ll touch on two cloud pattern implementations – competing consumers + service consolidation.

Application Design vs. Deployment Design

We should always write and design our services in discrete scalability units – but how they are deployed is a deployment question, not a design question. If we write expecting each of these units to be in its own container (e.g., stateless, multi-instance), what hosts the code (and what else the host is hosting) becomes irrelevant until our scalability requirements dictate we move those units to individual hosts.

Multiple Personalities

In a complex application, reliable messaging is a must, especially as we start to break our application into multiple discrete services. Reliable messaging through queues is a standard pattern, but how do we design our queues and workers? Are they one-to-one between queue/message and worker implementation? Perhaps they are when we roll to production or ramp beyond pilot, but this is the cloud…why are we deciding this now?

Let's start with a simple application – this application uses two queues, one for regular messages and one for exception messages. Each queue has queue workers that are dedicated to performing a task:

  1. for regular messages, the message is persisted to storage and a service is called.
  2. for exception messages, the exception is parsed and certain types of exception messages are transformed to regular messages and dropped back onto the regular queue.

How is our Azure solution arranged to accomplish this?

  1. Storage Account
    1. Regular Q
    2. Exception Q
  2. Cloud Service
    1. Regular worker
    2. Exception worker

They seem awfully similar, yes? Since we’re writing this code in its entirety, what’s to keep us from having a queue worker with multiple personalities?

Here’s our code today:

public interface IQueueManager
{
    object Read();
    object Peek();
    void Delete(object message);
}

public class MainQueueManager : IQueueManager { /* reads the regular queue */ }

public class ExceptionQueueManager : IQueueManager { /* reads the exception queue */ }

And the worker role’s Run() method. The ExceptionWorkerRole’s code would be remarkably similar, but in a separate role (thus incurring additional cost).

public class Worker
{
    public void Run()
    {
        while (true)
        {
            var queueManager = new MainQueueManager();
            var message = queueManager.Peek();
            var mainQueueProcessor = new MainQueueProcessor();
            // process the message, then delete it from the queue
        }
    }
}

This implementation is fine, but now we’re stuck with a single function queue worker – it reads a single queue and processes that message a single way. There are two specific behaviors we’re concerned with – the Queue which gets read, and the actions that happen depending on the specific message. If each of these is abstracted appropriately, our worker role can do whatever is appropriate under current load. This could be as simple as a condition in your worker role’s Run() method that checks all known queues, the type of message, then invokes one of a variety of implementations of an interface. Sound like a lot? It’s not. Let’s dive in.

Looking at all Queues

We’ll start with which queue to read from – we want to read from all queues that we get from configuration (or through DI, or however you choose to get them), so let’s abstract that away a bit:

public class Worker
{
    private IEnumerable<IQueueManager> queues;

    public void Run()
    {
        // get queues - create queue managers from config, pass from constructor, etc.
        queues = new List<IQueueManager>();
        while (true)
        {
            foreach (var q in queues)
            {
                // peek at each queue and hand the message off for processing
            }
        }
    }
}

All we’ve done here is allow a single worker to look into multiple queues – without changing any of our core queue management or processing code. The code that handles reading/peeking/deleting from the queue has all stayed the same, but our code can now run in a shared host with other queue readers. On to the queue processor.

Processing Different Messages

Now we need to do something depending on which message we get. This could be based on whatever criteria fits the need, for instance, perhaps the message data can be deserialized into a type with a specific flag. In our simple case, we’ll just look at the message data and check for a string.

private void DoQueueThings(IQueueManager queueManager)
{
    var message = queueManager.Peek();
    if (message.ToString() == "uh oh")
    {
        var eqp = new ExceptionQueueProcessor();
        // handle it as an exception message
    }
    else
    {
        var mqp = new MainQueueProcessor();
        // handle it as a regular message
    }
}


This is a very simple consolidation pattern, which gives each worker instance the ability to perform more than one function – excellent for controlling cost and management overhead. There are times this isn't appropriate – make sure functionality is grouped with similar tasks – e.g., if one task requires high CPU, it may not be appropriate to scale it with something processing a high volume of low-CPU tasks.

We're also implementing a simple competing consumers pattern (with a queue, not through our service consolidation), where more than one instance is capable of reading off a single queue. This may not be appropriate when operations are not idempotent (i.e., a function or task cannot be repeated without side effects – iterators are a classic example), or where the order of messages is important.

Stacking Sites in Web Roles

Previously we were looking at code changes we can make in our worker roles that can optimize efficiency – but these are all code changes. Next we’ll tackle IIS – this is all configuration, no code changes required.

Anyone familiar with running IIS knows you can run multiple web sites within a single IIS deployment – not only can multiple sites run on IIS, multiple sites on the same port can run using host headers. Azure provides you the same capability – through ServiceDefinition.csdef. It's not immediately obvious how to accomplish this in Visual Studio, but it's quite easy once you're comfortable with how Azure services are configured. There are two things we need to handle – one is the actual configuration of the web role, the other is making sure all sites are built and copied appropriately during packaging.


Our solution configuration is pretty simple – an Azure cloud service with a single web role and two MVC web apps. We’ve also got a single HTTP endpoint for our web role on port 80.


We’ll start in ServiceDefinition.csdef – here we’ll essentially configure IIS. We can add multiple sites, virtual directories, etc – for our purposes, we need to add an additional site. The ServiceDefinition.csdef probably looks a bit like this currently:

<WebRole name="AppWebRole" vmsize="Small">
    <Sites>
        <Site name="Web">
            <Bindings>
                <Binding name="Endpoint1" endpointName="Endpoint1" />
            </Bindings>
        </Site>
    </Sites>
    <Endpoints>
        <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
</WebRole>

Pretty straightforward. Now we need to let the Azure fabric know that we’re hosting multiple sites within this web role. You’ll note there’s a ‘sites’ collection – here we’ll add our additional sites (I’ve changed the endpoint names to make them more readable). Let’s take a quick look at what’s been done:

  1. First – we’ve added the physicalDirectory attribute to the Site tag. This is important and we’ll dig into it in a moment.
  2. The bindings have been updated to add the appropriate host header. In this example, we want our main site to receive all traffic, so we're using *.
  3. The second site should only respond to specific traffic – in this case, requests matching the host header configured on its binding.
<WebRole name="AppWebRole" vmsize="Small">
    <Sites>
        <Site name="Web" physicalDirectory="..\..\apps\AppWebRole">
            <Bindings>
                <Binding name="App1Endpoint" endpointName="HttpEndpoint" hostHeader="*" />
            </Bindings>
        </Site>
        <Site name="Web2" physicalDirectory="..\..\apps\WebApplication1">
            <Bindings>
                <Binding name="App2Endpoint" endpointName="HttpEndpoint" hostHeader="" />
            </Bindings>
        </Site>
    </Sites>
    <Endpoints>
        <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
    </Endpoints>
</WebRole>


Now that our service is all configured, we need to get our files in the right place. By default, the packager will package the other projects in their entirety as part of the package. This is bad for a lot of reasons, and it also removes some functionality – for instance, web.config transforms for projects outside the main project (the one associated with the web role) won't happen, because msbuild is never called for those projects.

There are multiple ways to accomplish this floating around the internet – some suggest updating the Azure ccproj to add additional build events as part of the msbuild script. I personally have used post-build events to locally publish to a subdirectory for each web project, something like this:

%WinDir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe "$(ProjectPath)" /T:PipelinePreDeployCopyAllFilesToOneFolder /P:AutoParameterizationWebConfigConnectionStrings=false /P:Configuration=$(ConfigurationName);PreBuildEvent="";PostBuildEvent="";PackageAsSingleFile=false;_PackageTempDir="$(ProjectDir)..\CLOUD_PROJECT\apps\$(ProjectName)"

Make sure to change CLOUD_PROJECT to the name of your cloud project. Dropping this into your post-build event for each web project will build your projects, copy the output to the target folder (matching the project name, this could be changed) before CSPack builds the Azure package.

As we continue to see businesses take their first steps into the cloud, the service consolidation pattern is guaranteed to be a common sight – on-premises IIS servers stacked to the gills with individual web sites are the norm, especially for dev/test and low-priority LOB apps. While I'm not advocating inappropriately reusing existing service hosts, maximizing the efficiency of the ones you have can greatly ease your first move to the cloud.

Updating ADFS 3 for WIA on Windows Tech Preview

If you're using the Windows Technical Preview, you may notice that ADFS presents you with a Forms login instead of using WIA from IE on a domain machine. This little chunk of PowerShell includes most of the major browsers that support WIA – you can plunk this into your ADFS server and get it going:

Set-AdfsProperties -WIASupportedUserAgents @("MSIE 6.0", "MSIE 7.0; Windows NT", "MSIE 8.0", "MSIE 9.0", "MSIE 10.0; Windows NT 6", "Windows NT 6.4; Trident/7.0", "Windows NT 6.4; Win64; x64; Trident/7.0", "Windows NT 6.4; WOW64; Trident/7.0", "Windows NT 6.3; Trident/7.0", "Windows NT 6.3; Win64; x64; Trident/7.0", "Windows NT 6.3; WOW64; Trident/7.0", "Windows NT 6.2; Trident/7.0", "Windows NT 6.2; Win64; x64; Trident/7.0", "Windows NT 6.2; WOW64; Trident/7.0", "Windows NT 6.1; Trident/7.0", "Windows NT 6.1; Win64; x64; Trident/7.0", "Windows NT 6.1; WOW64; Trident/7.0", "MSIPC", "Windows Rights Management Client")


In version 3, ADFS tries to intelligently present a user experience that’s appropriate for the device. Browsers that support WIA (like IE) provide silent sign on, while others (like Chrome, Firefox, mobile browsers, etc) are presented with a much more attractive and user friendly forms-based login. This is all automatically handled now, unlike before where users with non-WIA devices were prompted with an ugly and potentially dangerous basic 401 authentication box (if they were prompted at all).

This means you can now design a login page for non-WIA devices that might include your logo, some disclaimers or legal text.



TenantDbContext for Table Storage

If you've used the MVC templates with multi-organizational authentication, you'll inevitably end up with a bunch of generated Entity Framework goo for keeping track of organizational certificate thumbprints for orgs that have logged into your app. This is lame. We're creating two tables with a single column each in SQL?! I've never heard of a better use of table storage. Not to mention I've now got to pay for a SQL Azure instance, even if my app doesn't need it.

This speaks to a larger issue – how frequently are we, as developers, using SQL by default? Do we really need relational data? Are we enforcing constraints in our service layer as we should? We are?! This makes SQL even more ridiculous in this scenario.

I decided to build one that uses table storage. You'll need a few things:

a) the source

b) update your web.config to indicate the issuer registry type
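For (b), the relevant web.config section looks roughly like this – the type name is a placeholder for the table-storage-backed registry class:

```xml
<system.identityModel>
  <identityConfiguration>
    <!-- Placeholder type name; point this at the table-storage issuer registry -->
    <issuerNameRegistry type="YourApp.Security.TableStorageIssuerNameRegistry, YourApp" />
  </identityConfiguration>
</system.identityModel>
```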

The VS solution is on github here:

It’s dependent upon Azure configuration and Azure storage. Licensed under MIT, if you find it useful I’d just ask you drop me a line and let me know what neat thing you’re working on!

Smartphones Don’t Seem Very Smart Anymore

That headline may come across as rather spoiled, a la Louis CK’s always entertaining rant against people complaining about wifi on airplanes (you’re in a chair. In the air). The recent release of the giant iPhone got me reconsidering what I want in a phone, and it made me realize that Microsoft’s mobile offering is brilliantly ahead of its time, while its recent pivot is incredibly disappointing.


I've flown the Windows Phone banner since 2011, when I first got a Samsung Focus. I had an iPhone 4 for work and while it was fine, the freshness of WP was irresistible. I still have that Focus, actually. Sure it was limited at first – not many apps, some rather glaring omissions (copy/paste, anyone?). But the live tiles were excellent, the integration with other Microsoft services was top notch and the hubs were excellent. In fact, the hubs were what sold me on Windows Phone. All of my social stuff in one interface? Music? Pictures? Each one had its own hub, dedicated to the function it was supposed to do. I even bought my wife one, an HTC HD7, for Valentine's Day, plus a Zune Pass. It was great. Time went on, and more phones were released, some of which I bought, like the bright blue Lumia 900. That thing was a beast. Carolina Panthers blue, amazing screen and the fastest WP money could buy.


How slick is that? Great design, amazing to hold and looked fantastic.

Fast forward to now, I’ve gone through a few more, Lumia 920, 925, 1020 and (my current phone), a 1520, each one better than the one before.

The release of WP8 promised even more awesome stuff. We saw new tile sizes, BitLocker, IE10, SD card support – a slew of new stuff. Things were looking up.

Enter WP 8.1

The future. Credit:


WP 8.1 ushered in a new era of ‘completeness’ for WP. IE11 is onboard, including much broader HTML5 compatibility, plus support for all kinds of new sizes, internals and some pretty slick interface tweaks. Not to mention the Notification Center, and of course, Cortana.

But there’s another change WP8.1 brought in that feels like a massive step backward and plays to Windows Phone’s only significant weakness – third-party app support.

I know app counts and the like on WP are a running joke, but there’s a reason I bring this up – most of the hubs have been abandoned in favor of apps. Take the people hub, which I used to spend the majority of my phone time in – it’s now severely crippled by requiring an external app to source the info, as well as requiring that app to interact with the source that surfaced it (e.g., Facebook, Twitter, etc). The Me tile, which used to contain social notifications is now just a shell of what it was before – nothing useful at all, just a picture of myself and an option to check in. What does check in do? Open an app. sigh. I’ll come back to this later.

App Gap?

I picked up an iPad a week ago to do some Azure testing. I haven’t used an iOS device since the iPhone 4 in 2011, so I was curious to see how things had improved. I’ve got a Nexus 7 and a Nexus 5, which I used mostly for testing apps and to give my Google Glass a mobile connection (I keep the Nexus 5 in my bag). I’ve never been a huge fan of Android – I understand lots of people like it, but it’s just not for me.

But I missed the tiles – the home screen is still just a bunch of icons. No passing data, nothing – just icons. Android is the same way.

Anyway, so I've got this new iPad, and it's pretty slick – but the quality and speed of the applications were immediately apparent. And I immediately got sad. Seeing the gap between the quality of iOS apps v. Android apps brought me to a pretty terrible conclusion:

If Android is this far behind iOS, Windows Phone is…never going to see quality apps.

Thus my sadness. As much as I love the platform, it just can’t compete when it comes to apps. How can I, as an indie dev (I don’t do any mobile at my current gig, at least not right now), get developers to give a shit about the platform? Facebook doesn’t care, Twitter doesn’t care – and judging by the quality of what’s in the stores today, developers don’t care either. It’s either a half-ass attempt at porting an existing app or it’s someone’s less-than-brilliant interpretation of Microsoft’s modern design language. In short, the vast majority of apps that exist on the platform are shit.

Function-centric vs App-centric

And herein lies the problem. WP can't compete on apps. This is not opinion, it's fact. Look at the ratings in the Windows Store(s) – they're atrocious. Apps are consistently non-performant, released once and never updated, broken, or just generally of poor quality. Lots of major services don't even produce apps – and if they do, they are perpetually in beta (looking at you, Instagram), or Microsoft builds them itself (e.g., Facebook). Sure, there are third-party developers who do amazing things, but they are few and far between (note – someone needs to make a Readit-style reddit app for iOS – Readit is easily the best of breed right now).

Go do a quick web search for 'Xbox Music WP8.1' – I'll wait. Back? It's definitely another casualty of the 'let's make everything an app' decision. The reviews are atrocious, and even now, with a dedicated team + about a dozen releases, it's still nothing like what it used to be.

Context Awareness is the ‘next big thing’™

But Windows Phone’s strength was always in context-awareness. The hubs focused on what you wanted to do, and surfaced relevant data and actions. The tiles, when pinned, were updated with relevant information based on what you wanted to do – want to pin a specific stock in your portfolio? Cool, pin away. Chat was unified between SMS, Windows Live, Facebook – seems familiar, eh Google Hangouts? WP had this in 2011.

Me tile notifications centered all of your social updates (retweets, wall posts, linkedin (lol) interactions) into a single feed. “I want to see what’s happening with my network,” you’d say, and the Me tile & people hub delivered.

Cortana takes context sensitivity to the next level:

Remind me next time I talk to my wife to ask her about the company picnic.

Next time I talked to my wife, I got a popup reminding me what I asked to be reminded about.

Next time I’m at Lowe’s, remind me to pick up 8 G2 halogen bulbs.

Upon arriving in the Lowe’s parking lot, I’m reminded (from the previous weekend, no less) to get my bulbs.

Even better, Cortana can remind me about flights, news, topics, weather, traffic, all kinds of things, based on my behavior and implicit/explicit metadata. Some things I told her about, others she gleaned from email, searches, messages, etc.

But Microsoft’s desperation to appease the masses and move WP has resulted in the ‘app for everything’ decision. It certainly has its merits, faster updates, more ‘xxx,000 apps in our store!’ ads, etc – but I think it weakens the core strength of what WP is all about.

It’s an app centric world…for now

So again, we’ve ended up in the prickly spot where Microsoft’s released something brilliant, but it’s the wrong time. iOS and Android are app launchers – there’s nothing inherently ‘smart’ about the OS. Sure, more things are starting to poke in, but for the most part, the ‘innovation’ is all left up to third-party developers. I don’t want to be in and out of apps all day long. I want to see what’s relevant at a specific time, or in a certain place, or…

And people don’t care about context. In emerging markets, just the fact that there’s a thing in your hand connected to the internet and it didn’t cost a fortune is a miracle in itself. Think those people are going to bitch about app quality or availability?

First-world consumers don’t appear to care about context either, at least right now. Hopefully, this changes, but Microsoft’s got to focus their message. It’s OK that WP doesn’t have 1 million apps.

Microsoft – reintegrate with major players. Bring back and modularize the hubs to allow third party access (or at least make it easier for you to maintain them) without all of the pain of these terrible third-party apps.

Integration and context are the next big winners. Opening 46 different apps to do your work daily will get tiresome. We don’t need apps for every website on the planet. We need focused, relevant information without the noise.

Unfortunately, I don’t know where the platform goes from here. Nadella’s clearly focusing mobile dev where the money is, as the Microsoft apps on other platforms are very good (seriously, almost every service I use has an outstanding app on iOS), while the WP equivalents are hit or miss (see Skype).

I swapped my 1520 for a friend’s 5s for a week while I consider a big-ass iPhone 6. I have no idea what I’ll end up using daily, but I’ll say this. This damn 5s is tiny. I have no idea how people use this phone daily.

Headless Azure AD User Creation

If you’ve spent any time with the Azure Graph API, it’s pretty sweet. Federated identity for the masses, with almost zero drama. Up until now I was mostly doing logins, queries, etc. with Azure AD, but for my latest project, I need to create both new domains and new users in those domains. I haven’t tackled creating new domains yet, because that looks like it’s going to be a royal PITA (automating PowerShell? ick) – but I kicked down the user path today. Went pretty well, until I got stopped cold adding a user.

Here’s some code

Adding a user with ADALv2 + the Active Directory Graph Client is pretty easy. Both are NuGet packages and simplify the process considerably. You can also post the JSON yourself, which you can find here on MSDN.
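If you do want to post the JSON yourself, here’s a rough sketch in Python of what that request looks like – note the tenant name, `api-version`, and helper names here are illustrative assumptions on my part, not something from the official docs:

```python
import json
import urllib.request

# Classic Azure AD Graph 'create user' endpoint. The tenant name and
# api-version below are illustrative placeholders - substitute your own.
GRAPH_USERS_URL = "https://graph.windows.net/contoso.onmicrosoft.com/users?api-version=1.5"

def build_user_payload(display_name, upn, password):
    """Assemble the minimal required fields for a new cloud user."""
    return {
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": display_name.replace(" ", ""),
        "passwordProfile": {
            "password": password,
            "forceChangePasswordNextLogin": True,
        },
        "userPrincipalName": upn,
    }

def create_user(access_token, payload):
    """POST the payload to the Graph with a bearer token; returns the parsed response."""
    req = urllib.request.Request(
        GRAPH_USERS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```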

But I’m using ADGC, so here’s a quick snip of the required fields you’ll need to get a user created:

var gc = new GraphConnection(accessToken); // get this below

var pp = new PasswordProfile // required
{
    ForceChangePasswordNextLogin = true,
    Password = "Watermelon1!"
};

var u = new User
{
    DisplayName = displayName,
    UserPrincipalName = upn,
    PasswordProfile = pp,
    MailNickname = displayName.Replace(" ", string.Empty),
    UsageLocation = "US",
    AccountEnabled = true,
    ImmutableId = Guid.NewGuid().ToString()
};

try
{
    var p = gc.Add(u);
    Console.WriteLine("Created {0}, immutable ID: {1}", p.UserPrincipalName, p.ImmutableId);
}
catch (GraphException ex)
{
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine("{0}: {1}", ex.ErrorMessage, ex.ErrorResponse.Error.Message);
}

Pretty straightforward…until you get to the gc.Add(u) call – chances are you’ll blow up with a 403 Forbidden. In fact, chances are high – like, 100% – that this will happen (if it doesn’t, let me know).

Graph Read/Write

For whatever reason – and I’m still trying to figure out exactly why – the ‘Read and write directory data’ permission doesn’t appear to allow adding users. I’m assuming this is because they want a user who’s in one of the principal management roles, like User Administrator (see this post for some info on that), as opposed to allowing app principals to do this. The long and short is that the Graph API wants you to go through an OAuth browser flow to delegate a token from a user with the appropriate permissions. If you’re using ADALv2, there’s no AcquireToken overload that’ll do this headlessly. This is fine, unless you want to automate the creation of these users.

What are you to do?

Fortunately, we can use the OAuth password grant_type to request a token with only a user’s username & password.

AccessTokens & the Graph

You’ll need a few things to get set up. I’m not going to go into much detail here, because if you’re encountering this issue, chances are you’re already well set up. We need to request a token from the AAD STS, including the user’s username/password as well as the client ID and secret of the app you’re developing. Here’s a sample:

// login.windows.net was the AAD STS endpoint at the time of writing - substitute your tenant ID or name
var reqUri = "https://login.windows.net/TENANT_ID_OR_NAME/oauth2/token";
var postData = "resource=00000002-0000-0000-c000-000000000000&client_id={0}&grant_type=password&client_secret={1}&username={2}&password={3}";
var wc = new WebClient();
wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
// AppId = your app's client ID; EncodedKey = your URL-encoded client secret;
// Username/Password = the credentials of the delegating user
var response = wc.UploadString(reqUri, "POST", string.Format(postData, AppId, EncodedKey, Username, Password));
var tokenData = JObject.Parse(response);
return tokenData["access_token"].Value<string>();

Let’s deconstruct this request a bit, shall we?

https://login.windows.net/TENANT_ID_OR_NAME/oauth2/token

This is where we’re posting our token request – your tenant’s AAD STS token endpoint.


The POST body is the main chunk of our request:

  • resource – the resource you’re trying to access. In this case it’s the Graph, which always has this ID: 00000002-0000-0000-c000-000000000000
  • client_id – your app’s client ID
  • client_secret – important – make sure to URL encode your key before putting it here
  • grant_type – password
  • username – the user’s UPN
  • password – this should be obvious
  • scope – openid, to get data back in the OpenID Connect format

And make sure you URL encode the form/values before submitting, otherwise it’s 400s for you.
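If you want to sanity-check the encoding, Python’s urlencode shows the expected shape – every value below is a made-up placeholder:

```python
from urllib.parse import urlencode

# Form-encode the token request body. urlencode percent-escapes the values -
# note the '@' in the UPN and the symbols in the secret. All values are placeholders.
form = {
    "resource": "00000002-0000-0000-c000-000000000000",
    "client_id": "11111111-2222-3333-4444-555555555555",
    "client_secret": "s3cret+key/with=chars",
    "grant_type": "password",
    "username": "admin@contoso.onmicrosoft.com",
    "password": "Watermelon1!",
}
body = urlencode(form)
```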

And that’s it

Provided you got a token back and didn’t have any problems with the request, you should be able to tack that access token into the header

Authorization: Bearer ...access token...

or you can stuff that into

new GraphConnection(accessToken)

if you’re using the Graph Client wrapper.
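For the raw-header route, a minimal Python sketch (the token and tenant below are placeholders) looks like:

```python
import urllib.request

# Placeholder token - use the access_token from the STS response instead
access_token = "eyJ0eXAi..."

# Attach the bearer token to a Graph request via the Authorization header.
# The tenant name and api-version here are illustrative placeholders.
req = urllib.request.Request(
    "https://graph.windows.net/contoso.onmicrosoft.com/users?api-version=1.5",
    headers={"Authorization": "Bearer " + access_token},
)
```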

Create away! You’re off.


I’ll be at SharePoint Saturday, come out!

I’ll be speaking at SharePoint Saturday in Charlotte on September 20th. My session is on building a ‘shim’ for passive logon to Microsoft Online Services (Office 365, Azure, any federated apps) so you can do whatever kind of wild and crazy business logic you need to do. Should be fun!

Azure Admins vs. Azure AD Admins

This is a point that’s a bit ambiguous. I’m an Azure Service administrator, so I should be able to access the Azure AD associated with that tenant, right?

In a word, no.


If you need access to Azure Active Directory to add apps, users, etc., it’s pretty simple. It’s a request we get a lot: one of our clients is using Azure and has granted us Azure service admin rights (e.g., Azure co-administrator), but we still need more access.

You can do this at the Azure portal or, for adding a new user in your org or setting permissions for existing people in your org, the Office 365 Admin Portal.

Your Azure AD administrator (if you’re the creator of the account, this should be you, otherwise it’s time to track down the admin) just needs to add you. You can add

  • A new account in your organization – this creates a brand new account in your Azure AD tenant. No different from creating a new cloud-based user for O365.
  • An existing Microsoft Account – for sharing with the plebs who don’t have an Office account
  • An existing organizational account in another directory – for sharing with other organizations that use Azure AD

Once the account is in Azure AD, you can set an access level. More info on access levels below.

Office Portal

Adding someone from the Office portal is easy. Open the portal as an admin, go to users and get crackin’ – but note, the Office portal only allows you to add users to your own organization. Could you imagine the chaos if you had options to add users from other orgs in here? LOL

office portal

But – you can add rights to an existing user from within the Office portal. Find the user in the list and you can set their access levels in there. Here’s my Live account, which was already added to my AD – I can set admin rights right from the portal.



Azure Portal

You can do everything from in here. Let’s dig in. Head into the Azure portal and find Active Directory in the left nav. If you don’t see your directory and you’re a member of multiple Azure subscriptions, click the Subscriptions filter to make sure you find the right one.

Once you’ve found your directory, click it and go to the Users header – here we’ll add a new user and grant them rights to AD. Alternatively, if you already see the account you want to grant access to, you can do that from in here as well.


Pick one of those three options (they’re outlined above) – if it’s not a valid account for the type you picked, when you try to go to the next step, it’ll bomb out:



Otherwise, when you click next, you’ll be able to both set a name & display name for the new user (so that your Xbox name ‘N00b Slay3r 123′ doesn’t show up in your corporate apps) and grant an access level. Once you click the check, that new user is ready to go.


Role Play

Let’s talk about roles for a minute – you can find comparisons of each role here – I’m not going to repeat any of that.

Global administrators – if you’re using Azure for anything beyond test/dev, or if you’re using it with Office 365/Intune/CRM, you probably want as few global administrators as possible. The Global part of global administrator is just that – quite global.

So what’s an admin to do when he/she has devs clamoring to add apps? I can’t seem to find anything ‘official’ about which access levels are allowed to add applications to your Azure AD – but it appears that User administrators do have this right. It makes sense, since ‘user administration’ is really principal administration, and all of these apps are new principals.

If you only want to give your devs access to add new apps, User Administrator might be a good role for them.

DocuSign + SharePoint Online

Document signing + SharePoint Online with non-licensed users flew across my desk at WTFHQ today. Here’s the basic requirement:

Licensed users need to store PDFs in SharePoint while getting them digitally signed by non-licensed SharePoint users. Preferably without requiring creating External Users in SharePoint.

I’m happy to report this works quite easily with DocuSign + SharePoint Online. You’ll get SharePoint doc lib integration – e.g., the context menu will offer options on documents like ‘Sign with DocuSign’ and ‘Get Signatures with DocuSign.’

We’ll start with the main case – a licensed SharePoint user needs to get a document signed by a non-SharePoint user. It’s really pretty easy.

When you use the ‘Get Signatures with DocuSign’ option, you’re sent over to DocuSign, where you login (either with your Office 365 account or your DocuSign account), and it’s all vanilla from there – mark the required fields to collect from the target and drop their email addresses into the invitation field.



Document library integration

Invite non-licensed users to sign

The target gets an email, inviting them to sign the document. When they click the link, they’ll go directly to DocuSign to sign the document, no SharePoint account required. They get an option to download the document, and the signed copy goes back into your document library. For our scenario, we’re finished.

Mail received by the target signer

Signed document automatically back in SharePoint library


© 2015

Theme by Anders Noren, modified by jpd