
LOL – Late nights with Azure Search and Attributes for Index Metadata

I’m working with a client right now on modernizing and simplifying their search to use the new Azure Search service. Sure, the examples online are fine, but I wanted to decorate my data classes with attributes dictating the indexing settings for each field. Needless to say, I ended up with something just shy of mad:


This actually works. Its efficiency is up for debate, but it does work. I suspect I would have had just as much luck manipulating the JSON myself, but where’s the fun in that? Here’s what I started with – a simple Azure table entity POCO. Nothing too exciting, just a few fields of relevance:

This is fine – my attributes are there so I can configure indexing on the object itself. This is all well and good, but I need to actually create an index to put documents into:

This also worked well – my index schema gets created from the properties that are decorated with my attribute. The problem starts when I need to actually add the documents to the index. Since I’m inheriting from TableEntity (in this case), additional properties are included (like PartitionKey, RowKey, etc.). I need to grab only the properties which have indexing metadata, since those match the schema of the index. Apparently, including additional properties in the document you’re submitting to the index causes the call to blow up – it doesn’t just ignore the extra properties. I have to copy the relevant, decorated properties to a new object on the fly…and voilà, you end up with the silliness above.
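To make the shape of the trick concrete, here’s a rough Python sketch of the same idea (my actual code is C# with custom attributes; every name below is invented for illustration): per-field metadata drives both the index schema and the on-the-fly copy that strips the un-decorated TableEntity properties before submission.

```python
# Hypothetical Python analogue of the C# attribute approach: fields carry
# index metadata, the schema is reflected from that metadata, and only
# decorated fields are copied into the document sent to the index.

def indexed(searchable=False, filterable=False):
    """Stand-in for the C# indexing attribute."""
    return {"searchable": searchable, "filterable": filterable}

class Product:
    # TableEntity-style noise that must NOT reach the search index
    PartitionKey = None
    RowKey = None

    # the fields we actually want indexed, with per-field settings
    index_fields = {
        "name": indexed(searchable=True),
        "category": indexed(filterable=True),
    }

    def __init__(self, name, category):
        self.name = name
        self.category = category
        self.PartitionKey = "products"
        self.RowKey = name

def build_schema(cls):
    """Create the index schema from the declared field metadata."""
    return dict(cls.index_fields)

def to_index_document(entity):
    """Copy only the decorated fields to a fresh object; extra properties
    (PartitionKey, RowKey, ...) would make the indexing call blow up."""
    return {f: getattr(entity, f) for f in type(entity).index_fields}
```

The key move is the last function – a fresh object containing only the fields the index schema knows about.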

Cloud Patterns are all around us

I’ve been spending quite a bit of time in Tampa recently – most recently at Cardinal’s first annual Innovation Summit for our Tampa service. I like flying into Tampa because the airport has been done just right – going from curbside to gate routinely takes only around 10-15 minutes, especially if you have PreCheck. Even this morning, when the airport was pretty busy, it was only about 10 minutes from curb to gate. Charlotte is always crazy and busy, but it’s generally efficient as well, especially given the volume of people going through there – but I think Tampa has maximized its efficiency even more by putting more services closer to where and when they’re needed or consumed (I think you see where this is going).

This got me thinking as I plodded through the airport (top tip: airports aren’t a good place to break in new sandals): the application architectures that drive cloud efficiency are replicated in real life all around us.

Let’s look at what typical major airports have to deal with:

  • Large numbers of inbound and outbound traffic, for a variety of different (but known) tasks
  • Many gates, capable of moving large numbers of people and planes in and out, spread out across…
  • …multiple smaller, distributed buildings (airsides/concourses), with a small number of gates per building

Here’s a map of Tampa, which shows this:

Tampa International Airport  -



Distributed Services + Pipes/Filters

And the services that are offered in each area (note, my numbers have been pulled completely and totally out of thin air, I have no idea what volume TPA does):

Main Terminal – ticketing, checkin, greeting spots, baggage claim, cars, cabs, etc. – 10,000 people/hour

People Carrier – only ticketed passengers – 2,000 people/hour

Airside – only ticketed passengers leaving from one of the serviced gates – 2,000 people/hour

Concourse – only ticketed passengers, who passed security, who are leaving from a specific gate at some point today – 1,500 people/hour

Flight – only ticketed passengers, who passed security, who are leaving from gate F84 at a specific time today – 126 people right now

As you can imagine, by the time you get to your gate, you can say with relative confidence that everyone there is only leaving within a small time window from that gate. This is very similar to a pipes and filters pattern, where the same task is repeated over and over again, but the entire process is broken down into discrete steps, which, when executed, process data appropriately and route to the final destination.
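The airport version of pipes and filters can be sketched in a few lines of Python (the filter names and passenger fields here are made up for illustration) – each discrete step narrows the stream, and composing the steps routes travelers to their destination:

```python
# Pipes-and-filters sketch: each filter is a discrete, reusable step;
# the pipeline composes them so data is processed and routed in order.
from functools import reduce

def ticketed(people):
    """Main terminal -> people carrier: ticketed passengers only."""
    return [p for p in people if p.get("ticket")]

def screened(people):
    """Airside -> concourse: only those who passed security."""
    return [p for p in people if p.get("passed_security")]

def for_gate(gate):
    """Concourse -> flight: only those leaving from a specific gate."""
    def gate_filter(people):
        return [p for p in people if p.get("gate") == gate]
    return gate_filter

def pipeline(*filters):
    """Compose the discrete steps into one pipe."""
    return lambda data: reduce(lambda acc, f: f(acc), filters, data)

to_f84 = pipeline(ticketed, screened, for_gate("F84"))
```

By the last filter, everyone remaining is (with high confidence) on the F84 flight – exactly the narrowing the terminal-to-gate walk performs.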

Think TSA is slow today? Imagine if it processed every person through the airport, not just flying passengers. It would be a nightmare and horribly inefficient. By using multiple copies of the same service, closer to the consumer, you can distribute load across all of them, leading to significantly shorter wait times while still offering the same (or better) level of service.

Not only have we now distributed a high-transaction process, we’ve pushed it closer to the consumers and put it in a more efficient place in the workflow.

ID + Boarding Pass, Plz – Federated Identity

Anyone who’s ever heard one of my identity sessions knows I generally use an ID + bouncer at a club scenario to describe a federated identity system at work in the real world. This is no different.

In our airport case, upon arriving at the friendly TSA agent, I’m asked to produce some form of ID, plus a boarding pass. This combination of valid identity and time-sensitive token authenticates me to the agent, who then grants access to the terminal. The key here, however, is that the TSA agent has no idea who I am (well, at least I hope not). He (the relying party) relies (see how that works) on

  • an external, trusted third party, like the Secretary of State, in cases of a passport, or NCDMV for a driver’s license to vouch for my identity (e.g., an identity provider),
  • a known set of anti-forgery tools, like UV-sensitive watermarks, specific microprinting, encoded data, RFID, etc. to ensure the identity document is valid (e.g., a signature), and
  • some data points, like my picture, name, hair color, height, departing flight time, etc., to validate that I have access to what I’m requesting (e.g., attributes or claims, depending on the consumer; the requested access is generally to a resource)
  • In addition, my driver’s license is only valid in some situations, like ID validation within the US. This roughly translates to an audience – the targets the ID document has been issued for. This is an important security consideration – if someone steals my driver’s license, they can use it to validate their (my) identity. That doesn’t work outside of the States, however, so it’s only useful for spoofing identity within a specific region (e.g., the US), which is the only place that document is valid.

Our modern, federated identity patterns follow a similar pattern – a trusted third party acts as the identity provider and validator, which is what prevents me from having to have a TSA-specific ID. Since TSA trusts NCDMV and SoS, that trust is extended to verifiable documents I possess.

Your cloud app does the same thing – an exchange between the identity provider and the relying party establishes trust and shares a public key; tokens are then signed with the IdP’s private key, and the relying party verifies that signature using the IdP’s public key.
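A toy sketch of that issue/validate handshake, in Python: real federated systems use asymmetric signatures (the IdP signs with its private key, the relying party verifies with the published public key), but since the standard library has no RSA, a shared HMAC key stands in for the key material here. All names and claim fields are invented.

```python
# Token issue/validate sketch. The HMAC key below stands in for the
# IdP's signing key; production systems use asymmetric signatures.
import base64
import hashlib
import hmac
import json

IDP_KEY = b"idp-signing-key"  # stand-in for the IdP's private key

def issue_token(claims):
    """IdP side: serialize the claims and sign them."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def validate_token(token, expected_audience):
    """Relying-party side: check the signature, then the audience."""
    payload, sig = token.rsplit(".", 1)
    good = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None  # tampered with, or not from our trusted IdP
    claims = json.loads(base64.urlsafe_b64decode(payload))
    # the audience check: a token issued for one relying party is no
    # good at another - like a driver's license outside the US
    if claims.get("aud") != expected_audience:
        return None
    return claims
```

The audience rejection is the code version of the driver’s-license point above: a perfectly valid credential, presented at the wrong checkpoint, is still refused.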


PreCheck? Preferred? SkyPriority? Priority Queuing

Last one for now. Notice how there are multiple security lines at the airport? TSA PreCheck, First/Biz Class priority screening, Clear card, Crewmember? Here we have people who, one way or another, are of a higher priority to process faster. They could have gone through more rigorous background checks in the case of TSA Pre, or paid more for a flight – either way, each of these people get access to a shorter line for processing (ignore for now that the processing logic of each of these people is also different – imagine we’re all going to the same generic black hole of security screening). These are priority queues – people in those queues are processed first, while standard people either wait in a longer queue, or in some cases, the same queue, but priority people get preference as soon as they arrive. You see this pattern all over airports – check-in, security, gate.

In some cases, priority people have entirely separate queues with a shared TSA agent. In this case, as soon as a priority passenger arrives, the shared TSA agent pauses processing the standard queue to process the priority passenger. Once the priority queue is empty, that agent starts working the standard queue again.

In other cases, priority queues have dedicated processors reading those messages. This is closer to the check-in process, where dedicated check-in lines lead to dedicated check-in agents processing first/biz class passengers.

Lastly, we have the gate – first class first, then biz, then the unwashed masses. This is a single queue that calls on specific classes of messages to enter a queue at a specific time. This pattern is the least common of the priority queuing patterns seen in software and is generally used for managing messaging to a legacy system or something which requires specific processing order. You’ll usually see some other processing logic elsewhere that places messages in the queue at a specific time to ensure priority.

Anyway, your code can follow a lot of these patterns as well – for critical messages, perhaps you use a dedicated queue and dedicated processor. These messages are guaranteed to be processed nearly immediately. As soon as the dedicated processor is done with each priority message, it dips back to the queue for the next priority message.

Similarly, for priority-but-not-critical messages, or for priority messages that are rare, shared processors can check the priority queue before doing any standard queue processing. This allows for priority message processing, but not critical, real-time message processing. For example, if your worker is working on a long-running standard message, your priority queue item may wait before being processed, but would be guaranteed to be next in line.
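The shared-processor variant above can be sketched in a few lines of Python (queue and message names are invented): before pulling from the standard queue, the worker always drains anything waiting in the priority queue.

```python
# Shared-processor priority queue: the worker checks the priority queue
# before every standard-queue read, so priority messages are always next
# in line (though one in-flight standard message may finish first).
from collections import deque

priority_q = deque()
standard_q = deque()
processed = []

def next_message():
    """Priority messages win whenever any are waiting."""
    if priority_q:
        return priority_q.popleft()
    if standard_q:
        return standard_q.popleft()
    return None

def run_worker():
    while (msg := next_message()) is not None:
        processed.append(msg)  # stand-in for real message handling
```

A dedicated-processor setup (the check-in-desk case) would simply run a second worker that reads only `priority_q`.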

Simple priority queue processing. Source MSDN


Start Looking Around…

…and you’ll see a lot more patterns. There are more we could dig into just in an airport and all kinds of other scenarios. These cloud patterns make sense because they’ve been proven across the world in all sorts of physical incarnations – why reinvent the wheel if you don’t need to?

I hope this helps connect a few of the dots – as always, feel free to drop me a line or ask a question.

Is the sky falling?

Today was a neat day in the Azure space – Azure Websites has grown up and found itself. We’ve got new units of functionality that can build fully functional apps and workflows, interacting with different systems and moving data around (e.g., BizTalk), through a designer on a web page! Amazing. I came here tonight to dig in and share my thoughts on the new services, but I got sidetracked.

After the announcement, I kept up with social networks, internal and external, and generally there’s a healthy level of excitement. I think once people get their hands dirty, we’ll see a lot more excitement – but what I also saw was sadly typical when these kinds of announcements are made:

What makes me valuable is now available for anyone who can drag-and-drop on a webpage.

And this assertion would be correct, except for one crucial detail – what makes us valuable as software developers, engineers, architects, code monkeys, etc is everything except physically typing out the code. If your only value is in the physical delivery of the code, then it may be too late for you anyway. Let’s back up though.

Engineering + Heavy Clouds

Look at systems engineering over the past 10 years or so. These poor souls have had all kinds of the-sky-is-falling moments. First it was virtualization, then the cloud. Then SaaS – Office 365, SharePoint Online, Exchange. If your job involved managing and monitoring servers and services for your company, your job has been under attack for a decade.

But has it? How many people lost their jobs because their company elected to deploy Office 365? Many people adapted existing skills and learned new ones. I’ve yet to see “WILL ADMINISTER SHAREPOINT FOR FOOD” signs littering once-vibrant office parks. I once read that if change isn’t something you’re interested in, technology is not the industry for you. That statement pretty much summarizes the majority of this post, so feel free to leave now if you’ve gotten what you came for.

In all reality, jobs in the space have stayed relatively stable in relation to other IT jobs. For example, if you look at the trends over the past 10 years, systems administration and software engineering jobs have followed a similar course:

Software Engineer - source:


Systems Administrator - source:


See how similar they are? People aren’t being replaced – in fact, these graphs are a little disingenuous as the ‘overall percentage of matching job postings’ includes most job posts on the internet, which are, of course, exploding. The point, however, is that we’re seeing the same general trends in both systems and software engineering. What did people do? They adapted, they translated existing skills into new platforms, they learned new chunks of knowledge to handle what was coming their way.

Why? Think about it – what hardware Exchange is installed on is irrelevant. The hardware is commodity now. Administering Exchange requires a certain set of skills; before Office 365 and after, those skills aren’t dramatically different. Sure, it’s fewer servers to manage, but how many Exchange admins were really managing that enormous Jet database manually anyway? That knowledge and skillset transfer readily.

Software Development Is Next.

There have been pivotal changes in software in the past ten years – virtualization to an extent and (obviously) cloud. Maximizing efficiency of resources and time-to-market agility has made the cloud what it is. We’re in the ‘coarse’ efficiency now – the next five years or so will bring a whole new era of abstraction and efficiency.

Anyway – let’s get back to my original issue. Software engineering is already going through some significant changes, but one of the biggest ones speaks directly to my original issue above. At some point, skills become commodity. Is there anyone working in dev today that can’t readily find sample code for connecting to 365 and make that work for their application? It’s become so commonplace that that’s no longer a ‘special’ skill – in fact, it has been made so repeatable that we can drag a block with an Office logo on it and connect to SharePoint Online data without writing any code.

Who out there is impressed that I wrote a web app that had a nifty sliding progress bar? Anyone? Bueller? Bueller? That’s not impressive anymore. Years ago, when XMLHttpRequest was new, making a web call without a postback was amazing. Mind = blown. Now there are dozens of frameworks that make many, many lines of code boil down to a single line:


Are you going to put ‘implemented progressBar’ on your resume? It can sit right next to ‘managed to get both feet into shoes.’ I think not.

Platform Dev vs. Implementers

It’s silly to think that what we’re going through now is somehow different from what we’ve gone through forever. But there is, among all the change, one constant that seems to be creating a larger gap daily: platform development vs. implementation.

Take a look at what was announced in the Azure space today – ‘Logic Apps,’ ‘API apps,’ – each one a higher-level abstraction of a few existing pieces that let you compose services from existing building blocks. The guys building those blocks have no idea what you’re going to do with them and in what combinations you may elect to build. But it doesn’t matter. The software is written in a way that supports, nay, encourages that kind of development. If I can get away with dragging seven existing blocks onto a designer and solve whatever problem I was attempting to solve, how is that not stuffed to the gills with win?

Better yet, say there’s not a block that does what I need. Let’s build the block and write it properly so other people can use the block. Sounds pretty neat, huh?

Which are you?

When you start a project, do you write a bunch of problem-specific code? Are you one-offing everything you do? How much code reuse is in that block you just wrote? Your busy work becomes less valuable when someone else can implement existing blocks plus 10% new code in half the time. If you’re solving a problem, solve it once and reuse it as many times as possible. Microservices and distributed components are how you gain maximum leverage from the time you’ve already spent.

Platform Dev is the Future

This should be obvious if you’ve made it this far, but I think it’s fairly clear that platform development is where the world of software development is heading. That doesn’t mean custom software won’t exist, but it won’t be ‘built from the ground up.’ It’ll be built from existing blocks of increasingly complex, reusable code. Conceptually this isn’t different from the frameworks we use today. When was the last time you managed memory? Opened raw sockets and sent HTTP requests manually? All of these things are offered by most of the major players today, to abstract complexity and menial, repeatable tasks. As we’re seeing today, the API/reusable block market is exploding. If that means your job is in danger, then perhaps it’s time to start thinking platforms and stop writing code merely for the finger exercise.

Always think about a platform, always think about how you can make your code as generic and reusable as possible, and think about what kinds of other uses it may have. Build for platforms, not for implementations.

Using Organizational Accounts for Azure Subscription Administration

Here’s one we get frequently – no one wants their enterprise Azure account administered by someone’s Xbox Gamertag. ‘’ doesn’t look great during a review of admins, nor is it easy to immediately know who the slayer of noobs may be. Organizational accounts are so much better for management and control over who’s handling your subscription.

There are two main scenarios – adding a user from a directory that’s already connected to the subscription, and also adding a user from a different directory (think managed services – managing a client’s Azure subscription using your existing organizational account).

To make it easier, let’s start with some definitions.


a) Organizational account – also known as an Azure AD account. Ends in your own domain (like or or the out-of-the-box managed domain ( If you’re using Office 365 today, that’s an Azure AD/Organizational Account.

b) MSA – Microsoft Account, like,,, etc.

c) Tenant – organization, specific instance of Azure AD for your organization

And the two scenarios for today:

a) Administering your subscription with an account from your organization

b) Administering another subscription with an account from your organization

Subscriptions + Azure AD Tenants

There is a bit of confusion surrounding how these two seemingly unrelated products work together. You’ll get an Azure AD tenant as part of the setup process of a new Azure account. That tenant’s domain will end in – of course you can add your own domains (and if you plan SSO, it’ll be a requirement), but out of the box, with zero configuration, you’ll have an Azure AD tenant. That tenant’s only administrator should be the MSA you used to create your Azure subscription. This is an important distinction, because if your tenant and sub were provisioned differently, you may need to make sure your MSA is an administrator of your Azure AD tenant.

Administering your own Subscription

Linking your Azure Subscription to your Azure AD Tenant

This one is quick and easy. Provided you’ve set up your Azure AD domain (there are plenty of tutorials for doing this), it’s a two-step process. First we need to link your subscription with an Azure AD organization/tenant. Start at and head down to Settings. If you’re using the same MSA that you created your Azure subscription with, your MSA should also be an administrator of your Azure AD tenant.

Under settings --> subscriptions, you'll find a list of your subscriptions.


Now, find your subscription in the list, and at the bottom you’ll see ‘Edit Directory’ in the dark bar.

Find this to link your subscription to a directory.


This will bring up a new box listing all of the Azure AD tenants (not Azure subscriptions!) your MSA has access to. In your case, you may only have one. If you have more than one, pick the one that looks familiar or that contains the organizational user you want to have access to that subscription (e.g., to grant one of my organizational accounts access to my sub, I need to pick the Azure AD tenant that contains that account).

You'll see a list of all of the Azure AD Tenants that MSA has access to.


Once you’ve picked yours, you’ll need to confirm the change. No sweat.

Next we need to add the administrators.

Adding an Organizational Administrator

If your Azure subscription got linked up to the proper directory in the last step, this is just as easy as adding a new administrator. Under Settings –> Administrators, click Add at the bottom. You should see a simple form to search for a user:

Settings --> Administrators


You'll notice MSAs resolve as MSAs.


...and that organizational accounts will resolve as org accounts.


Once you’ve typed in the proper account that needs access, check the boxes next to the subscription(s) you want that account to be able to co-administer. If the name resolves, you’re finished.

Using Org Accounts to Manage Another Org’s Subscription

Take this scenario – you are a managed service provider offering Azure resources and subscriptions as part of your management package. This implies you need to sign into other Azure subscriptions and manage them, as a co-administrator. But your organizational account is already using Azure AD, so you (and any of your employees) still need to use organizational accounts to administer customer subscriptions (instead of MSAs). Fortunately, it’s possible, but with a few more hoops.

Here are some dummy account names to make this more concrete:

a) Your business is Super Azure Consultancy, or SAC, and your domain is @sac.local (this deliberately won’t resolve and won’t be a domain that you’ll find in any Azure AD tenant. The domains need to be internet-resolvable).

b) You offer managed Azure subscriptions to your clients. Client A is Larsen’s Biscuits, @larsen.local

c) Larsen’s has an Azure subscription, tied to their Azure AD tenant (per the instructions in the section above).

d) You need to have your employee, mark@sac.local, administer Larsen’s Azure subscription using his @sac.local account.

Getting another Org’s Users into Your Org’s Tenant

Add a user from Tenant A to Tenant B

First, we need to add mark@sac.local to the larsen.local Azure AD tenant. This sounds simple on the surface, but is a bit trickier. In short, you need to sign into the Azure management portal with an account that has access to both the source directory (sac.local) and the target directory (larsen.local). This is likely easiest using an MSA.

Either create or use an MSA and add that MSA as a global administrator to each Azure AD tenant, not just the Azure subscription (see here for how to do that).

Next we need to add Mark (mark@sac.local) to the Larsen tenant. Head into Azure AD in the Management Portal, click Users, then Add:

Management Portal --> Azure AD --> Tenant --> Users --> Add


You’re going to want to pick ‘User in another Windows Azure AD directory.’ Next we’ll type in mark@sac.local – if it works, you’ve done it successfully. You can add the user as a user, unless that user is going to be administering Azure AD as well.

If you get a message that the current user doesn’t have access to the directory, you’ll need to be sure the MSA has admin rights to both Azure AD tenants, using the info in the link a little higher.

You'll see this message if you're not using an account that's an administrator in BOTH Azure AD tenants.


Add External Tenant user as Co-Administrator

At this point, you should be able to follow the directions from “Adding an Organizational Administrator” above, but be sure to use the new user you just added (e.g., mark@sac.local). This will allow mark@sac.local to administer the Azure subscription linked to the @larsen.local Azure AD tenant.

Confused yet?

Feel free to reach out with any problems.

Azure AD – the most basic of basics.

I’ve been speaking about Azure AD + cloud identity a lot recently, mostly at DevCamps along the east coast (which, by the way, if you’re near one you should come spend the day with us – I’ll be in Raleigh 3/17 and Charlotte 4/1).

Identity is a massive topic and as such, trying to cover even a big bite of it within an hour is difficult. I think we all take for granted all of the ‘behind the scenes’ work that goes on in ‘traditional’ identity systems, like Active Directory Domain Services. In a completely controlled domain environment, AD gives us all we’d ever need for user authentication, much of which end users never see. The wonders of Kerberos!

Unfortunately, a lot of these pieces don’t really work over the internet, nor do they work in untrusted environments (e.g., on non-domain-joined machines). Imagine trying to resolve the NetBIOS name of your domain via DNS on your iPhone – something tells me ‘CORP’ won’t resolve, and even if it does, chances are it won’t be the ‘CORP’ you’re looking for.

So how do we maintain the same fidelity of user experience on non-domain joined devices? Let’s take a brief history lesson. Modern identity really isn’t all that modern at all, at least conceptually – most of the same functions occur, but the underlying implementations are different (and more visible to the end user).


We’ll start with a hyper-simplification of what Kerberos does.

I need a TGT (Ticket-Granting Ticket), which I can exchange for service tickets. Those service tickets are what I use to authenticate to resources. In addition, the service I’m attempting to authenticate to needs to know how to understand the ticket and be able to validate it.

In Windows, this process is highly transparent to the user. When I sign into a domain joined machine, I’m getting my TGT. Generally the TGT is longer-lasting so I can continue to request service tickets. When the login process is spinning, part of what you’re waiting on is getting the TGT.

I found a rad image from the late 90s (I especially love that hot-pink starburst) outlining the back-and-forth between a PC, service + Kerberos KDC:

Kerberos circa 2000. Credit: MSDN



‘Modern’ Federated Identity

Now let’s look at a modern federated identity platform and how it works:

Federated Identity cloud design pattern. Source:


In a federated identity system, we have a Secure Token Service that issues validatable tokens containing all of your claims (attributes – name, unique ID, UPN, etc). The service needing authentication is a Relying Party because it Relies on the STS for authentication. You, as the consumer, request a token (ticket) from the STS, which then in turn is used to authenticate to the remote service. The remote service trusts the STS and via a key exchange, can ensure the validity of the ticket. Sound familiar?

Trusted v Untrusted

So why does identity throw us devs and IT pros completely off our game? Simple – until now, this process was almost completely transparent. Someone else had done all the hard work for us, mostly through NT/Kerberos. In a trusted environment, our machine is part of the domain, so our machine authenticates and so do our users. In an untrusted environment (e.g., non-domain devices, the internet, etc.), the entirety of the process happens over HTTP, and it’s visible, because users encounter it when they try to sign into a protected property. The remote web servers of SharePoint Online, for example, have no idea where I’m coming from, except that it’s an internet browser. There’s no intrinsic domain PC trust and no visibility into that machine – so when a user tries to authenticate, these token swaps still have to happen, and the easiest way to do that with browser-based applications is simply sending the user from point to point in the authentication chain.

That’s why you’re going to see the Office 365 login page, followed by ‘Redirecting you to your organization…’, followed by your STS (like ADFS), before being redirected back into the target application. In a properly configured environment, this should result in an as-close-as-possible SSO experience.

What we’re really seeing is that modern identity platforms aren’t all that modern at all. What makes them ‘modern’ is the implementation (not that it’s trivial – far from it) and the fact that the protocol used for communication is simply more visible to end users.

I’ll have some more hands-on getting started around Azure AD soon.

Adding Existing VHDs to Azure Resource Groups

Azure Resource Groups. Simultaneously the most exciting and most frustrating part of Azure vNext. While powerful, today they’re quite inflexible – no API is exposed for editing info (like names), moving resources from one group to another, or really any management at all. And the sprawl – the sprawl! It’s awful. So many auto-generated resource groups. With a finite number allowed per account, actually using them appropriately is a priority.

I elected to move some of my older deployments into a new virtual network, to gain the internal load balancer and to bring things on par with what’s current. For VMs, this is generally easy – delete the VM, keeping the disks, then recreate the VM in the new network. No big deal. In the same way, I could also deploy these machines into the same resource group, containing similar resources.

Until I got into the new portal – I can’t, for the life of me, find any way within the new portal to create a new VM from an existing VHD. I dug through the PowerShell cmdlets for a bit, still couldn’t find much – particularly for adding that new VM into an existing resource group.

Side note: we still can’t upload our own Resource Group Templates? Really?


For now, create a throwaway virtual machine in your resource group, with the proper cloud service name (DNS name, in the new portal), networking, storage account, etc. In the old portal (or through PowerShell), create your new VM from the existing VHD as you would normally, making sure to pick the cloud service which was created in your resource group. You can delete the throwaway VM now too. Check back in the portal a bit later and your existing VHD should now be in a new VM, in a resource group of your choosing.

Protecting WCF with Azure AD

Mobile services. MVC Web APIs. They’re ubiquitous now. In some cases, though, WCF is still the platform of choice for service developers. Sometimes it’s interoperability with other services, sometimes it’s just not wanting to rewrite old code – or perhaps a large part of your architecture requires service hosts + factories. Whatever the reason, it’s not feasible to rewrite or rearchitect large swaths of systems just to add authentication.

Typical, 3-tier Apps

Let’s look at a typical three-tier app – UI, service + data:

Firewalls keep everyone out except our web app.

Firewalls keep everyone out except our web app.

Here, we’ve got a web app which talks to an unauthenticated service, which talks to some data. Pretty simple stuff. The box indicates the internet permeability – if the web server is the only thing exposed to the internet, this is a generally OK approach. If nothing has access to the service except the target consumer, what could go wrong? How hard could it be?

In this model, your web app is handling authenticating clients, which then proxies requests back to the service. A pretty standard model.


But let’s extrapolate further. It’s 2015 – how many services have only a single web client anymore? Everything is connected and everything is slurping data from everything else. Not only are we going to have trusted hosts, we’re going to have mobile apps, and perhaps we expose an API – there’s a lot to consider. Here’s how I’d expect our app to look through a modern lens:

Our service now has to handle multiple clients – and they’re not all coming from a trusted host.

Our app has to handle some number of potentially unknown entry points.

So what are we to do? We can leverage OAuth server-to-server to secure our services. This way, we’re not publishing a static key into our mobile applications – as anyone who’s seen how trivial it is to decompile an Android app knows, you can never trust the client. There are two options – we’re going to dig into the first (application-only, 2-legged OAuth) – and we’ll follow up with 3-legged in a later post.

Server-to-server OAuth (a.k.a. 2-legged, Application-Only)

Our first option is:

  • significantly better than no authentication
  • somewhat better than static keys/shared credentials
  • useful for locking-down an API, but not necessarily at a user-level

This is application-only access, also known as two-legged OAuth. In this model, the server doesn’t need to know a specific user principal, but an app principal. A principal token is required by the service and is requested by the client:

  • STS-known client requests an OAuth token from STS (e.g., Azure AD)
  • STS-known client sends token in header (Authorization: Bearer eQy…)
  • Service expects header, retrieves token
  • Service validates token with Azure AD
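As a sketch, the first step of that flow boils down to a single form-encoded POST against the tenant’s token endpoint – the tenant, client ID, secret, and resource URI below are placeholders, not values from any real directory:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenClient
{
    // Builds the client-credentials (2-legged) token request body.
    public static FormUrlEncodedContent BuildTokenRequest(
        string clientId, string clientSecret, string resource)
    {
        return new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "client_credentials" },
            { "client_id", clientId },
            { "client_secret", clientSecret },
            { "resource", resource }
        });
    }

    // POSTs to the tenant's token endpoint; the response JSON contains access_token.
    public static async Task<string> GetTokenJsonAsync(
        string tenant, string clientId, string clientSecret, string resource)
    {
        using (var http = new HttpClient())
        {
            var url = string.Format("https://login.windows.net/{0}/oauth2/token", tenant);
            var response = await http.PostAsync(
                url, BuildTokenRequest(clientId, clientSecret, resource));
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

ADAL wraps all of this for you (and we’ll use it on the client side below); this is just to show there’s no magic in the handshake.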

User OAuth (aka 3-legged)

This option is somewhat different – instead of using an application principal to connect to our service, we’re going to connect on behalf of the user. This is useful for:

  • applications that rely on a service to security-trim data returned
  • services that are public or expected to have many untrusted clients

In this model, the user authenticates and authorizes the application to act on their behalf, and an access token is issued upon successful authentication. The application then uses that token to request resources. If you’ve ever used an app for Facebook or Twitter, you’ve been through a 3-legged OAuth flow.

WCF Service Behaviors + Filters

There are two pieces we need to build – a server-side Service Behavior that inspects + validates the incoming token, and a client-side filter that acquires a token and stuffs it in the Authorization header before the WCF message is sent. We’ve used this pattern on a few projects now – this is a good resource for more details and similar implementations.

We need to do three things:

  • Update the WCF service with a message inspector that will inspect the current message
  • Update the WCF client to request a token and include it in the outgoing message
  • Update the WCF service’s Azure AD application manifest to expose that permission to other Azure AD applications

Service Side

Service side, we want something which can inspect messages as they come in; this inspector will both grab the token off the request and validate it. This started life from the blog post above, but was modified for clarity and for the newer identity objects in .NET 4.5 and WIF.

Some Code

Looking through here, we’ll find pretty much everything we need to make our WCF service ready to receive and validate tokens. The highlights:


Here we’re doing the main chunk of work. AfterReceiveRequest is fired after the WCF subsystem receives the request, but before it’s passed on to the service implementation – the perfect place for us to do some work. We start by pulling the Authorization header off the request, finding the federation metadata for that tenant, and validating the token. System.IdentityModel.Tokens.JwtSecurityTokenHandler does the heavy lifting here (it’s a NuGet package), handling the roundtrips to AAD to validate the token based on the configuration. Take note of the TokenValidationParameters object; any misconfiguration here will cause validation to fail.
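A minimal sketch of that validation step might look like the following – the audience and issuer values are placeholders, and in the real inspector the signing tokens come from the tenant’s federation metadata rather than being passed in:

```csharp
using System.Collections.Generic;
using System.IdentityModel.Tokens; // JwtSecurityTokenHandler ships in the JWT NuGet package
using System.Security.Claims;

public static class TokenValidator
{
    public static ClaimsPrincipal Validate(string rawToken, IEnumerable<SecurityToken> signingTokens)
    {
        var parameters = new TokenValidationParameters
        {
            // The App ID URI of your service's AAD application (placeholder)
            ValidAudience = "https://contoso.onmicrosoft.com/MyWcfService",
            // The tenant's issuer, as published in its federation metadata (placeholder)
            ValidIssuer = "https://sts.windows.net/<tenant-id>/",
            IssuerSigningTokens = signingTokens
        };

        var handler = new JwtSecurityTokenHandler();
        SecurityToken validatedToken;
        // Throws if the signature, audience, issuer, or lifetime checks fail.
        return handler.ValidateToken(rawToken, parameters, out validatedToken);
    }
}
```

If validation succeeds you get back a ClaimsPrincipal you can stash on the current thread or operation context for the service implementation to use.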


Next we’ll need to create a service behavior, instructing WCF to add our new message inspector to each endpoint’s MessageInspectors collection.
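Sketched out, that behavior is mostly boilerplate – the type names here are illustrative, from this post’s sample rather than any library, and the inspector body is a stub for the token work described above:

```csharp
using System.Collections.ObjectModel;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Stub inspector - in the sample this is where AfterReceiveRequest
// grabs and validates the bearer token.
public class AuthorizationMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel,
        InstanceContext instanceContext)
    {
        // pull the Authorization header, validate the JWT, throw on failure
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}

public class AuthorizationServiceBehavior : IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        // Attach the inspector to every endpoint so it fires for all calls.
        foreach (var dispatcher in serviceHostBase.ChannelDispatchers.OfType<ChannelDispatcher>())
        {
            foreach (var endpoint in dispatcher.Endpoints)
            {
                endpoint.DispatchRuntime.MessageInspectors.Add(new AuthorizationMessageInspector());
            }
        }
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }
}
```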


This is a simple class to add the service behavior to an extension that can be controlled via config.


This is a helper for returning error data in the event of a failed authentication call. We can return a WWW-Authenticate header here (in the case of a 401), instructing the caller where to retrieve a valid token.
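A sketch of that helper – it stamps the reply with a 401 and points the caller at an authority URL (the URL itself is a placeholder for your tenant’s endpoint):

```csharp
using System.Net;
using System.ServiceModel.Channels;

public static class AuthenticationFault
{
    public static void SetUnauthorized(Message reply, string authorityUrl)
    {
        var httpResponse = new HttpResponseMessageProperty
        {
            StatusCode = HttpStatusCode.Unauthorized
        };
        // Tell the caller where a valid bearer token can be obtained.
        httpResponse.Headers[HttpResponseHeader.WwwAuthenticate] =
            "Bearer authorization_uri=\"" + authorityUrl + "\"";
        reply.Properties[HttpResponseMessageProperty.Name] = httpResponse;
    }
}
```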

Service Configuration

The last piece is updating the WCF service’s config to enable that message inspector:
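As a sketch, that wiring might look like this – the extension name and the assembly-qualified type are placeholders for your own:

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="tokenAuthorization"
           type="MyService.AuthorizationBehaviorExtension, MyService" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <tokenAuthorization />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```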

Client Side

Now that our service is set up to both find and validate tokens, we need our clients to acquire those tokens and send them over in the headers. This is much simpler, thanks to ADAL – getting a token is about a five-line operation.


The AuthorizationHeaderMessageInspector runs on a client and handles two things – acquiring the token and putting it in the proper header.


This is a simple helper for acquiring the token using ADAL. You can modify it to pop a browser window and get user tokens, but as written it’s completely headless and uses an application-only token. ADAL also handles caching the tokens, so no need to fret about calling this on every request.
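Under stated assumptions – the authority, client ID, secret, and resource URI below are all placeholders – the ADAL call really is about five lines:

```csharp
using Microsoft.IdentityModel.Clients.ActiveDirectory; // ADAL NuGet package

public static class TokenHelper
{
    public static string GetAppToken()
    {
        // Authority is your tenant; ADAL caches tokens internally,
        // so calling this per-request is cheap after the first call.
        var authContext = new AuthenticationContext("https://login.windows.net/contoso.onmicrosoft.com");
        var credential = new ClientCredential("<client-id>", "<client-secret>");
        var result = authContext.AcquireToken("https://contoso.onmicrosoft.com/MyWcfService", credential);
        return result.AccessToken;
    }
}
```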


A wrapper to add the AuthorizationHeaderMessageInspector to your outgoing messages.


A simple extension method for adding the endpoint behavior to the service client.
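Pulled together, a sketch of the client-side endpoint behavior and the extension method might look like this – the inspector body is a stub standing in for the token-acquiring inspector described above, and the type names are from this sample, not a library:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Stub - in the sample, BeforeSendRequest acquires a token (via ADAL)
// and stuffs it into the Authorization header of the outgoing message.
public class AuthorizationHeaderMessageInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // add "Authorization: Bearer <token>" to the HTTP request here
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

public class AuthorizationEndpointBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new AuthorizationHeaderMessageInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

public static class EndpointExtensions
{
    public static void AddAuthorizationEndpointBehavior(this ServiceEndpoint endpoint)
    {
        endpoint.Behaviors.Add(new AuthorizationEndpointBehavior());
    }
}
```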


Wrapping it all together, here’s what we’ve got – a simple call to ServiceClient.Endpoint.AddAuthorizationEndpointBehavior() and our client is configured with a token. Your outgoing call will include the header, which the service will consume and validate, sending you back some data. Easy, right?!

Configuring Azure AD

The last thing we need to do is configure Azure AD with our applications. Those client IDs and secrets aren’t just going to create themselves, eh? I’m hopeful if you’ve made it this far that adding a new application to Azure AD isn’t taxing your mental resources, so I won’t get into how to create the applications. Once they’re created, we need to do two things – expose the permission and grant that to our client. Let’s go.

App Manifest

The app manifest is the master record of your application’s configuration. You can access it via the portal, using ‘Manage Manifest’ in the menu of your app:

Download your manifest and check it out. It’s likely pretty simple. We want to add a chunk to the oauth2Permissions block, then upload it back into the portal:
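For reference, the chunk looks roughly like this – the id is a placeholder (generate your own GUID), and the display names, descriptions, and value are yours to pick:

```json
"oauth2Permissions": [
  {
    "adminConsentDescription": "Allow the application to access MyWcfService on behalf of the signed-in user.",
    "adminConsentDisplayName": "Access MyWcfService",
    "id": "00000000-0000-0000-0000-000000000000",
    "isEnabled": true,
    "type": "User",
    "userConsentDescription": "Allow the application to access MyWcfService on your behalf.",
    "userConsentDisplayName": "Access MyWcfService",
    "value": "user_impersonation"
  }
]
```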

What’s this doing, exactly? It’s allowing us to expose a specific permission to Azure AD, so we can grant that permission to other Azure AD applications. Head over to your client application’s Azure AD app record. Near the bottom of the ‘Configure’ section, we’ll see ‘Permissions to other applications’ – let’s find our service in this list. Once you’ve found it, you can grant specific permissions. Extrapolate this further, and you can see there’s certainly room for improvement. Perhaps other permission sets and permissions are available within our app? They can be exposed and granted here.

Ed note: It’s finally out of preview!

It’s a Wrap

What you’ve seen is a ready-to-go example of using Azure AD to authenticate your applications. We’ll dig into using user tokens at both the application and service levels in a later post, but in the meantime, you’ve now got a way that’s better than shared credentials or *gasp* no authentication on your services.

Consolidating Services for Maximum Efficiency

Every day we’re bombarded with vendors, providers and *ahem* consultants telling us we need to break up our apps for maximum scalability & availability in the cloud. This is true – one of the keys to maximizing efficiency is breaking your applications down into units of work that can be scaled independently. This comes at a cost, however – imagine your Azure cloud project is made up of a dozen web services spread out over a dozen web roles. That gets expensive quickly, especially if you’re targeting the SLA – 24 instances for a dozen services?

Let’s say you’re migrating a few LOB apps to the cloud – does each of these need its own scalability unit? Perhaps they work in concert together, or perhaps no single application taxes the underlying servers more than a few percentage points at a time. Is this really the most efficient use of resources?

Breaking your application into smaller units on expected scalability boundaries is a best practice, without a doubt – but does that require that each service live within its own instance all the time? Let’s revisit our guidance and turn it into something more palatable and more explicit. We’ll look at two examples, reusing queue workers and stacking multiple IIS applications on a web role.

We’ll touch on two cloud pattern implementations – competing consumers + service consolidation.

Application Design vs. Deployment Design

We should always write and design our services in discrete scalability units – but how they are deployed is a deployment question, not a design question. If we write expecting each of these units to be in its own container (e.g., stateless, multi-instance), what hosts the code (and what else the host is hosting) becomes irrelevant until our scalability requirements dictate we move those units to individual hosts.

Multiple Personalities

In a complex application, reliable messaging is a must, especially as we start to break our application into multiple discrete services. Reliable messaging through queues is a standard pattern, but how do we design our queues and workers? Are they one-to-one between queue/message and worker implementation? Perhaps they are when we roll to production or scale beyond a pilot, but this is the cloud… why are we deciding this now?

Let’s start with a simple application – this application uses two queues, one for regular messages and one for exception messages. Each queue has workers dedicated to performing a task:

  1. for regular messages, the message is persisted to storage and a service is called.
  2. for exception messages, the exception is parsed and certain types of exception messages are transformed to regular messages and dropped back onto the regular queue.

How is our Azure solution arranged to accomplish this?

  1. Storage Account
    1. Regular Q
    2. Exception Q
  2. Cloud Service
    1. Regular worker
    2. Exception worker

They seem awfully similar, yes? Since we’re writing this code in its entirety, what’s to keep us from having a queue worker with multiple personalities?

Here’s our code today:

public interface IQueueManager
{
    object Read();
    object Peek();
    void Delete(object message);
}

public class MainQueueManager : IQueueManager
{
    // reads/peeks/deletes against the regular queue
}

public class ExceptionQueueManager : IQueueManager
{
    // reads/peeks/deletes against the exception queue
}

And here’s the worker role’s Run() method. The ExceptionWorkerRole’s code would be remarkably similar, but lives in a separate role (thus incurring additional cost).

public class Worker
{
    public void Run()
    {
        while (true)
        {
            var queueManager = new MainQueueManager();
            var message = queueManager.Peek();
            var mainQueueProcessor = new MainQueueProcessor();
            // ...process the message, then delete it from the queue
        }
    }
}

This implementation is fine, but now we’re stuck with a single function queue worker – it reads a single queue and processes that message a single way. There are two specific behaviors we’re concerned with – the Queue which gets read, and the actions that happen depending on the specific message. If each of these is abstracted appropriately, our worker role can do whatever is appropriate under current load. This could be as simple as a condition in your worker role’s Run() method that checks all known queues, the type of message, then invokes one of a variety of implementations of an interface. Sound like a lot? It’s not. Let’s dive in.

Looking at all Queues

We’ll start with which queue to read from – we want to read from all queues that we get from configuration (or through DI, or however you choose to get them), so let’s abstract that away a bit:

public class Worker
{
    private IEnumerable<IQueueManager> queues;

    public void Run()
    {
        //get queues - create queue managers from config, pass from constructor, etc.
        queues = new List<IQueueManager>();
        while (true)
        {
            foreach (var q in queues)
            {
                // peek each queue and process whatever we find
            }
        }
    }
}

All we’ve done here is allow a single worker to look into multiple queues – without changing any of our core queue management or processing code. The code that handles reading/peeking/deleting from the queue has all stayed the same, but our code can now run in a shared host with other queue readers. On to the queue processor.

Processing Different Messages

Now we need to do something depending on which message we get. This could be based on whatever criteria fits the need, for instance, perhaps the message data can be deserialized into a type with a specific flag. In our simple case, we’ll just look at the message data and check for a string.

private void DoQueueThings(IQueueManager queueManager)
{
    var message = queueManager.Peek();
    if (message.ToString() == "uh oh")
    {
        var eqp = new ExceptionQueueProcessor();
        // ...handle the exception message
    }
    else
    {
        var mqp = new MainQueueProcessor();
        // ...handle the regular message
    }
}


This is a very simple consolidation pattern, which gives each worker instance the ability to perform more than one function – excellent for controlling cost and easing management. There are times this isn’t appropriate – make sure functionality is grouped with similar tasks; e.g., if one task is CPU-intensive, it may not be appropriate to scale it with something processing a high volume of low-CPU tasks.

We’re also implementing a simple competing consumers pattern (via the queue, not through our service consolidation), where some number of instances greater than one is capable of reading off a single queue. This may not be appropriate when operations are not idempotent (i.e., the task cannot be repeated without side effects – consuming an iterator is a classic example), or where the order of messages is important.

Stacking Sites in Web Roles

Previously we were looking at code changes we can make in our worker roles that can optimize efficiency – but these are all code changes. Next we’ll tackle IIS – this is all configuration, no code changes required.

Anyone familiar with running IIS knows you can run multiple web sites within a single IIS deployment – not only can multiple sites run on IIS, multiple sites on the same port can run using host headers. Azure provides you the same capability through ServiceDefinition.csdef. It’s not immediately obvious how to accomplish this in Visual Studio, but it’s quite easy once you’re comfortable with how Azure services are configured. There are two things we need to handle – one is the actual configuration of the web role, the other is making sure all sites are built and copied appropriately during packaging.


Our solution configuration is pretty simple – an Azure cloud service with a single web role and two MVC web apps. We’ve also got a single HTTP endpoint for our web role on port 80.


We’ll start in ServiceDefinition.csdef – here we’ll essentially configure IIS. We can add multiple sites, virtual directories, etc – for our purposes, we need to add an additional site. The ServiceDefinition.csdef probably looks a bit like this currently:

<WebRole name="AppWebRole" vmsize="Small">
    <Sites>
        <Site name="Web">
            <Bindings>
                <Binding name="Endpoint1" endpointName="Endpoint1" />
            </Bindings>
        </Site>
    </Sites>
    <Endpoints>
        <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
</WebRole>

Pretty straightforward. Now we need to let the Azure fabric know that we’re hosting multiple sites within this web role. You’ll note there’s a ‘sites’ collection – here we’ll add our additional sites (I’ve changed the endpoint names to make them more readable). Let’s take a quick look at what’s been done:

  1. First – we’ve added the physicalDirectory attribute to the Site tag. This is important and we’ll dig into it in a moment.
  2. The bindings have been updated to add the appropriate host header. In this example, we want our main site to receive all traffic, so we’re using *.
  3. The second site should only respond to specific traffic – in this case, a specific host header:
<WebRole name="AppWebRole" vmsize="Small">
    <Sites>
        <Site name="Web" physicalDirectory="..\..\apps\AppWebRole">
            <Bindings>
                <Binding name="App1Endpoint" endpointName="HttpEndpoint" hostHeader="*" />
            </Bindings>
        </Site>
        <Site name="Web2" physicalDirectory="..\..\apps\WebApplication1">
            <Bindings>
                <Binding name="App2Endpoint" endpointName="HttpEndpoint" hostHeader="" />
            </Bindings>
        </Site>
    </Sites>
    <Endpoints>
        <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
    </Endpoints>
</WebRole>


Now that our service is all configured, we need to get our files in the right place. By default, the packager will package the other projects in their entirety as part of the package. This is bad for a lot of reasons, and it also loses some functionality – for instance, web.config transforms for projects outside the main project (the one associated with the web role) won’t happen, because msbuild is never called for those projects.

There are multiple ways to accomplish this floating around the internet – some suggest updating the Azure ccproj to add additional build events as part of the msbuild script. I personally have used post-build events to locally publish to a subdirectory for each web project, something like this:

%WinDir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe "$(ProjectPath)" /T:PipelinePreDeployCopyAllFilesToOneFolder /P:AutoParameterizationWebConfigConnectionStrings=false /P:Configuration=$(ConfigurationName);PreBuildEvent="";PostBuildEvent="";PackageAsSingleFile=false;_PackageTempDir="$(ProjectDir)..\CLOUD_PROJECT\apps\$(ProjectName)"

Make sure to change CLOUD_PROJECT to the name of your cloud project. Dropping this into your post-build event for each web project will build your projects, copy the output to the target folder (matching the project name, this could be changed) before CSPack builds the Azure package.

As we continue to see businesses take their first steps into the cloud, the service consolidation pattern is guaranteed to be a common sight – on-premises IIS servers stacked to the gills with individual web sites are a common pattern, especially for dev/test and low-priority LOB apps. While I’m not advocating for inappropriately reusing existing service hosts, maximizing the efficiency of the ones you have can greatly ease your first move to the cloud.

Updating ADFS 3 for WIA on Windows Tech Preview

If you’re using the Windows Technical Preview, you may notice that ADFS presents you with a forms login instead of using WIA from IE on a domain machine. This little chunk of PowerShell includes most of the major browsers that support WIA – you can plunk it into your ADFS server and get going:

Set-AdfsProperties -WIASupportedUserAgents @("MSIE 6.0", "MSIE 7.0; Windows NT", "MSIE 8.0", "MSIE 9.0", "MSIE 10.0; Windows NT 6", "Windows NT 6.4; Trident/7.0", "Windows NT 6.4; Win64; x64; Trident/7.0", "Windows NT 6.4; WOW64; Trident/7.0", "Windows NT 6.3; Trident/7.0", "Windows NT 6.3; Win64; x64; Trident/7.0", "Windows NT 6.3; WOW64; Trident/7.0", "Windows NT 6.2; Trident/7.0", "Windows NT 6.2; Win64; x64; Trident/7.0", "Windows NT 6.2; WOW64; Trident/7.0", "Windows NT 6.1; Trident/7.0", "Windows NT 6.1; Win64; x64; Trident/7.0", "Windows NT 6.1; WOW64; Trident/7.0", "MSIPC", "Windows Rights Management Client")


In version 3, ADFS tries to intelligently present a user experience that’s appropriate for the device. Browsers that support WIA (like IE) provide silent sign on, while others (like Chrome, Firefox, mobile browsers, etc) are presented with a much more attractive and user friendly forms-based login. This is all automatically handled now, unlike before where users with non-WIA devices were prompted with an ugly and potentially dangerous basic 401 authentication box (if they were prompted at all).

This means you can now design a login page for non WIA devices that might include your logo, some disclaimers or legal text.


iOS 8x and ADFS 3

TenantDbContext for Table Storage

Anyone who’s used the MVC templates with multi-organizational authentication will inevitably end up with a bunch of generated Entity Framework goo for keeping track of certificate thumbprints for the orgs that have logged into your app. This is lame. We’re creating two tables with a single column each in SQL?! I’ve never heard of a better use of table storage. Not to mention I’ve now got to pay for a SQL Azure instance, even if my app doesn’t need it.

This speaks to a larger issue – how frequently are we, as developers, using SQL by default? Do we really need relational data? Are we enforcing constraints in our service layer as we should? We are?! This makes SQL even more ridiculous in this scenario.

I decided to build one that uses table storage. You’ll need a few things:

a) the source

b) update your web.config to indicate the issuer registry type
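As an example, that web.config change might look something like this – the assembly-qualified type name is a placeholder for the issuer registry type in the project you pull down:

```xml
<system.identityModel>
  <identityConfiguration>
    <!-- Swap the default EF-backed registry for the table-storage one -->
    <issuerNameRegistry type="MyApp.TableStorageIssuerRegistry, MyApp" />
  </identityConfiguration>
</system.identityModel>
```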

The VS solution is on github here:

It’s dependent upon Azure configuration and Azure storage. Licensed under MIT, if you find it useful I’d just ask you drop me a line and let me know what neat thing you’re working on!
