jpd.ms

on-prem = legacy

Is the sky falling?

Today was a neat day in the Azure space – Azure Websites has grown up and found itself. We’ve got new units of functionality that can build fully functional apps and workflows, interacting with different systems and moving data around (e.g., BizTalk), through a designer on a web page! Amazing. I came here tonight to dig in and share my thoughts on the new services, but I got sidetracked.

After the announcement, I kept up with social networks, internal and external, and generally there’s a healthy level of excitement. I think once people get their hands dirty, we’ll see a lot more excitement – but what I also saw was sadly typical when these kinds of announcements are made:

What makes me valuable is now available for anyone who can drag-and-drop on a webpage.

And this assertion would be correct, except for one crucial detail – what makes us valuable as software developers, engineers, architects, code monkeys, etc. is everything except physically typing out the code. If your only value is in the physical delivery of the code, then it may be too late for you anyway. Let’s back up though.

Engineering + Heavy Clouds

Look at systems engineering over the past 10 years or so. These poor souls have had all kinds of the-sky-is-falling moments. First it was virtualization, then the cloud. Then SaaS – Office 365, SharePoint Online, Exchange. If your job involved managing and monitoring servers and services for your company, your job has been under attack for a decade.

But has it? How many people lost their jobs because their company elected to deploy Office 365? Many people adapted existing skills and learned new ones. I’ve yet to see “WILL ADMINISTER SHAREPOINT FOR FOOD” signs littering once-vibrant office parks. I once read that if change isn’t something you’re interested in, technology is not the industry for you. That statement pretty much summarizes the majority of this post, so feel free to leave now if you’ve gotten what you came for.

In reality, jobs in the space have stayed relatively stable in relation to other IT jobs. For example, if you look at the trends over the past 10 years, systems administration and software engineering jobs have followed a similar course:

Software Engineer – source: indeed.com/jobtrends

Systems Administrator – source: indeed.com/jobtrends

See how similar they are? People aren’t being replaced – in fact, these graphs are a little disingenuous as the ‘overall percentage of matching job postings’ includes most job posts on the internet, which are, of course, exploding. The point, however, is that we’re seeing the same general trends in both systems and software engineering. What did people do? They adapted, they translated existing skills into new platforms, they learned new chunks of knowledge to handle what was coming their way.

Why? Think about it – what hardware Exchange is installed on is irrelevant. The hardware is commodity now. Administering Exchange requires a certain set of skills; before Office 365 and after, those skills aren’t dramatically different. Sure, it’s fewer servers to manage, but how many Exchange admins were really managing that enormous Jet database manually anyway? That knowledge and skillset transfers readily.

Software Development Is Next.

There have been pivotal changes in software in the past ten years – virtualization to an extent and (obviously) cloud. Maximizing efficiency of resources and time-to-market agility has made the cloud what it is. We’re in the ‘coarse’ phase of that efficiency now – the next five years or so will bring a whole new era of abstraction and efficiency.

Anyway – let’s get back to my original issue. Software engineering is already going through some significant changes, but one of the biggest ones speaks directly to my original issue above. At some point, skills become commodity. Is there anyone working in dev today that can’t readily find sample code for connecting to Salesforce.com/Dropbox/Office 365 and make that work for their application? It’s become so commonplace that that’s no longer a ‘special’ skill – in fact, it has been made so repeatable that we can drag a block with an Office logo on it and connect to SharePoint Online data without writing any code.

Who out there is impressed that I wrote a web app that had a nifty sliding progress bar? Anyone? Bueller? Bueller? That’s not impressive anymore. Years ago, when XMLHttpRequest was new, making a web call without a postback was amazing. Mind = blown. Now there are dozens of frameworks that make many, many lines of code boil down to a single line:

$("thing").progressBar();

Are you going to put ‘implemented progressBar’ on your resume? It can sit right next to ‘managed to get both feet into shoes.’ I think not.

Platform Dev vs. Implementers

It’s silly to think that what we’re going through now is somehow different from what we’ve gone through forever. But there is, among all the change, one constant that seems to be creating a larger gap daily: platform development versus implementation.

Take a look at what was announced in the Azure space today – ‘Logic Apps,’ ‘API apps,’ – each one a higher-level abstraction of a few existing pieces that let you compose services from existing building blocks. The guys building those blocks have no idea what you’re going to do with them and in what combinations you may elect to build. But it doesn’t matter. The software is written in a way that supports, nay, encourages that kind of development. If I can get away with dragging seven existing blocks onto a designer and solve whatever problem I was attempting to solve, how is that not stuffed to the gills with win?

Better yet, say there’s not a block that does what I need. Let’s build the block and write it properly so other people can use the block. Sounds pretty neat, huh?

Which are you?

When you start a project, do you write a bunch of problem-specific code? Are you one-offing everything you do? How much code reuse is in that block that you just wrote? Your time spent on busy work becomes less valuable when someone else can take existing blocks, add 10% new code, and ship in half the time. If you’re solving a problem, solve it once and use it as many times as possible. Microservices and distributed components are how you gain maximum leverage for the time you’ve already spent.

Platform Dev is the Future

This should be obvious if you’ve made it this far, but I think it’s fairly clear that platform development is where the world of software development is heading. That doesn’t mean custom software won’t exist, but it won’t be ‘built from the ground up.’ It’ll be built from existing blocks of increasingly complex, reusable code. Conceptually this isn’t different from the frameworks we use today. When was the last time you managed memory? Opened raw sockets and sent HTTP requests manually? All of these things are offered by most of the major players today, to abstract complexity and menial, repeatable tasks. As we’re seeing today, the API/reusable block market is exploding. If that means your job is in danger, then perhaps it’s time to start thinking platforms and stop writing code merely for the finger exercise.

Always think about a platform, always think about how you can make your code as generic and reusable as possible, and think about what kinds of other uses it may have. Build for platforms, not for implementations.

Using Organizational Accounts for Azure Subscription Administration

Here’s one we get frequently – no one wants their enterprise Azure account administered by someone’s Xbox Gamertag. ‘noobslayer@hotmail.com’ doesn’t look great during a review of admins, nor is it easy to immediately know who the slayer of noobs may be. Organizational accounts are so much better for management and control over who’s handling your subscription.

There are two main scenarios – adding a user from a directory that’s already connected to the subscription, and also adding a user from a different directory (think managed services – managing a client’s Azure subscription using your existing organizational account).

To make it easier, let’s start with some definitions.

Definitions

a) Organizational account – also known as an Azure AD account. Ends in your own domain (like jpd.ms or cardinalsolutions.com) or the out-of-the-box managed domain (yourorg.onmicrosoft.com). If you’re using Office 365 today, that’s an Azure AD/Organizational Account.

b) MSA – Microsoft Account, like someone@hotmail.com, @outlook.com, @live.com, etc.

c) Tenant – organization, specific instance of Azure AD for your organization

And the two scenarios for today:

a) Administering your subscription with an account from your organization

b) Administering another subscription with an account from your organization

Subscriptions + Azure AD Tenants

There is a bit of confusion surrounding how these two seemingly unrelated products work together. You’ll get an Azure AD tenant as part of your setup process of a new Azure Account. That tenant’s domain will end in .onmicrosoft.com – of course you can add your own domains (and if you plan SSO, it’ll be a requirement), but out of the box, with zero configuration, you’ll have a .onmicrosoft.com Azure AD tenant. That tenant’s only administrator should be the MSA you used to create your Azure subscription. This is an important distinction, because if your tenant and sub were provisioned differently, you may need to make sure your MSA is an administrator of your Azure AD tenant.

Administering your own Subscription

Linking your Azure Subscription to your Azure AD Tenant

This one is quick and easy. Provided you’ve set up your Azure AD domain (there are plenty of tutorials for doing this), it’s a two-step process. First we need to link your subscription with an Azure AD organization/tenant. Start at https://manage.windowsazure.com and head down to settings. If you’re using the same MSA that you created your Azure subscription with, your MSA should also be an administrator of your Azure AD tenant.

Under settings –> subscriptions, you’ll find a list of your subscriptions.

Now, find your subscription in the list, and at the bottom you’ll see ‘Edit Directory’ in the dark bar.

Find this to link your subscription to a directory.

This will bring up a new box, listing all of the Azure AD Tenants (not Azure subscriptions!) your MSA has access to. In your case, you may only have one. If you have more than one, pick the one that looks familiar or that contains the organizational user you want to have access to that subscription (e.g., if I want joe@company.com to have access to my sub, I need to find the Azure AD tenant that has joe@company.com).

You’ll see a list of all of the Azure AD Tenants that MSA has access to.

Once you’ve picked yours, you’ll need to confirm the change. No sweat.

Next we need to add the administrators.

Adding an Organizational Administrator

If your Azure subscription got linked up to the proper directory in the last step, this is just as easy as adding a new administrator. Under Settings –> Administrators, click Add at the bottom. You should see a simple form to search for a user:

Settings –> Administrators

You’ll notice MSAs resolve as MSAs…

…and organizational accounts will resolve as org accounts.

Once you’ve typed in the proper account that needs access, check the boxes next to the subscription(s) you want that account to be able to co-administer. If the name resolves, you’re finished.

Using Org Accounts to Manage Another Org’s Subscription

Take this scenario – you are a managed service provider offering Azure resources and subscriptions as part of your management package. This implies you need to sign into other organizations’ Azure subscriptions and manage them as a co-administrator. Your own organization is already using Azure AD, so you (and your employees) want to administer customer subscriptions with your existing organizational accounts instead of MSAs. Fortunately, it’s possible, but with a few more hoops.

Here are some dummy account names to make this more concrete:

a) Your business is Super Azure Consultancy, or SAC, and your domain is @sac.local (this is deliberately a non-resolvable example that you won’t find in any Azure AD tenant; in practice, your domains need to be internet-resolvable).

b) You offer managed Azure subscriptions to your clients. Client A is Larsen’s Biscuits, @larsen.local

c) Larsen’s has an Azure subscription, tied to their Azure AD tenant (per the instructions in the section above).

d) You need to have your employee, mark@sac.local, administer Larsen’s Azure subscription using his @sac.local account.

Getting another Org’s Users into Your Org’s Tenant

Add a user from Tenant A to Tenant B

First, we need to add mark@sac.local to the larsen.local Azure AD tenant. This sounds simple on the surface, but is a bit trickier. In short, you need to sign into the Azure management portal with an account that has access to both the source directory (sac.local) and the target directory (larsen.local). This is likely easiest using an MSA.

Either create or use an MSA and add that MSA as a global administrator to each Azure AD tenant, not just the Azure subscription (see here for how to do that).

Next we need to add Mark (mark@sac.local) to the Larsen tenant. Head into Azure AD in the Management Portal, click Users, then Add:

Management Portal –> Azure AD –> Tenant –> Users –> Add

You’re going to want to pick ‘User in another Windows Azure AD directory.’ Next we’ll type in mark@sac.local – if the name resolves, you’ve done it successfully. You can leave the role as ‘User,’ unless that person is going to be administering Azure AD as well.

If you get a message that the current user doesn’t have access to the directory, you’ll need to make sure the MSA has admin rights to both Azure AD tenants, using the info linked above.

You’ll see this message if you’re not using an account that’s an administrator in BOTH Azure AD tenants.

Add External Tenant user as Co-Administrator

At this point, you should be able to follow the directions from “Adding an Organizational Administrator” above, but be sure to use the new user you just added (e.g., mark@sac.local). This will allow mark@sac.local to administer the Azure subscription linked to the @larsen.local Azure AD tenant.

Confused yet?

Feel free to reach out with any problems.

Azure AD – the most basic of basics.

I’ve been speaking about Azure AD + cloud identity a lot recently, mostly at DevCamps along the east coast (which, by the way, if you’re near one you should come spend the day with us – I’ll be in Raleigh 3/17 and Charlotte 4/1).

Identity is a massive topic and as such, trying to cover even a big bite of it within an hour is difficult. I think we all take for granted all of the ‘behind the scenes’ work that goes on in ‘traditional’ identity systems, like Active Directory Domain Services. In a completely controlled domain environment, AD gives us all we’d ever need for user authentication, much of which end users never see. The wonders of Kerberos!

Unfortunately, a lot of these pieces don’t really work over the internet, nor do they work in untrusted environments (e.g., on non-domain-joined machines). Imagine trying to resolve the NetBIOS name of your domain via DNS on your iPhone – something tells me ‘CORP’ won’t resolve, and even if it does, chances are it won’t be the ‘CORP’ you’re looking for.

So how do we maintain the same fidelity of user experience on non-domain joined devices? Let’s take a brief history lesson. Modern identity really isn’t all that modern at all, at least conceptually – most of the same functions occur, but the underlying implementations are different (and more visible to the end user).

Kerberos

We’ll start with a hyper-simplification of what Kerberos does.

I need a TGT (Ticket-Granting Ticket), which I can exchange for service tickets. Those service tickets are what I use to authenticate to resources. In addition, the service I’m attempting to authenticate to needs to know how to understand the ticket and be able to validate it.

In Windows, this process is highly transparent to the user. When I sign into a domain joined machine, I’m getting my TGT. Generally the TGT is longer-lasting so I can continue to request service tickets. When the login process is spinning, part of what you’re waiting on is getting the TGT.

I found a rad image from the late 90s (I especially love that hot-pink starburst) outlining the back-and-forth between a PC, service + Kerberos KDC:

Kerberos circa 2000. Credit: MSDN

‘Modern’ Federated Identity

Now let’s look at a modern federated identity platform and how it works:

Federated Identity cloud design pattern. Source: https://msdn.microsoft.com/en-us/library/dn589790.aspx

In a federated identity system, we have a Secure Token Service (STS) that issues verifiable tokens containing all of your claims (attributes – name, unique ID, UPN, etc). The service needing authentication is a Relying Party because it relies on the STS for authentication. You, as the consumer, request a token (ticket) from the STS, which in turn is used to authenticate to the remote service. The remote service trusts the STS and, via a key exchange, can ensure the validity of the ticket. Sound familiar?

Trusted v Untrusted

So why does identity throw us as devs + IT pros completely off our game? Simple – until now, this process was almost completely transparent. Someone else had done all the hard work for us, mostly through NT/Kerberos. In a trusted environment, our machine is part of the domain, so our machine authenticates and so do our users. In an untrusted environment (e.g., non-domain devices, the internet, etc), the entire process happens over HTTP – and it’s visible, because users run into it whenever they try to sign into a protected property. The web servers behind SharePoint Online, for example, have no idea where I’m coming from, except that it’s an internet browser. There’s no intrinsic domain PC trust, there’s no visibility into that machine – so when a user tries to authenticate, these token swaps are going to happen, and the easiest way to do that with browser-based applications is by just sending the user from point to point in the authentication chain.

That’s why you’re going to see the Office 365 login page, followed by ‘Redirecting you to your organization…’ followed by your STS (like ADFS), before being redirected back into the target application. In a properly-configured environment, this should result in an experience as close to SSO as possible.

What we’re really seeing here is that modern identity platforms aren’t all that modern at all – what makes them ‘modern’ is the implementation (which is far from trivial) and the fact that the protocol used for communication is simply more visible to the end users.

I’ll have some more hands-on getting started around Azure AD soon.

Adding Existing VHDs to Azure Resource Groups

Azure Resource Groups. Simultaneously the most exciting and most frustrating part of Azure vNext. While powerful, today they’re quite inflexible – no API is exposed to allow editing info (like names), moving resources from one to another, really any management at all. And the sprawl – the sprawl! It’s awful. So many auto-generated resource groups. With a finite number allowed per account, actually using them appropriately is a priority.

I elected to move some of my older deployments into a new virtual network, to gain the internal load balancer and to bring things on par with what’s current. For VMs, this is generally easy – delete the VM, keeping the disks, then recreate the VM in the new network. No big deal. The same approach lets me deploy these machines into the same resource group, alongside similar resources.

Until I got into the new portal – I can’t, for the life of me, find any way within the new portal to create a new VM from an existing VHD. I dug through the PowerShell cmdlets for a bit, still couldn’t find much – particularly for adding that new VM into an existing resource group.

Side note: we still can’t upload our own Resource Group Templates? Really?

Clunk

For now, create a throwaway virtual machine in your resource group, with the proper cloud service name (DNS name, in the new portal), networking, storage account, etc. In the old portal (or through PowerShell), create your new VM from the existing VHD as you would normally, making sure to pick the cloud service which was created in your resource group. You can delete the throwaway VM now too. Check back in the portal a bit later and your existing VHD should now be in a new VM, in a resource group of your choosing.

Protecting WCF with Azure AD

Mobile services. MVC Web APIs. They’re ubiquitous now. In some cases though, WCF is still the platform of choice for service developers. Sometimes it’s interoperability with other services, sometimes it’s just not wanting to rewrite old code – or perhaps a large part of your architecture requires service hosts + factories – whatever the reason, it’s not feasible to rewrite or rearchitect large swaths of systems just to add authentication.

Typical, 3-tier Apps

Let’s look at a typical three-tier app – UI, service + data:

Firewalls keep everyone out except our web app.

Here, we’ve got a web app which talks to an unauthenticated service, which talks to some data. Pretty simple stuff. The box indicates the internet permeability – if the web server is the only thing exposed to the internet, this is a generally OK approach. If nothing has access to the service except the target consumer, what could go wrong? How hard could it be?

In this model, your web app handles authenticating clients and proxies requests back to the service. A pretty standard model.

Tomorrow

But let’s extrapolate further. It’s 2015 – how many services only have a single web client anymore? Everything is connected and everything is slurping data from everything else. Not only are we going to have trusted hosts, we’re going to have mobile apps, perhaps we expose an API – there are lots of things to consider. Here’s how I’d expect our app to look from a modern looking glass:

Our service now has to handle multiple clients – and they’re not all coming from a trusted host.

Our app has to handle some number of potentially unknown entry points.

So what are we to do? We can leverage OAuth server-to-server to secure our services. This way, we’re not publishing a static key into our mobile applications – as anyone who’s seen how trivial it is to decompile an Android app knows, you can never trust the client. There are two options – we’re going to dig into the first (application-only, 2-legged OAuth) – and we’ll follow up with 3-legged in a later post.

Server-to-server OAuth (e.g., 2-legged, Application-Only)

Our first option is:

  • significantly better than no authentication
  • somewhat better than static keys/shared credentials
  • useful for locking-down an API, but not necessarily at a user-level

This is application-only access, also known as two-legged OAuth. In this model, the server doesn’t need to know a specific user principal, but an app principal. A principal token is required by the service and is requested by the client:

  • STS-known client requests an OAuth token from STS (e.g., Azure AD)
  • STS-known client sends token in header (Authorization: Bearer eQy…)
  • Service expects header, retrieves token
  • Service validates token with Azure AD

User OAuth (aka 3-legged)

This option is somewhat different – instead of using an application principal to connect to our service, we’re going to be connecting on behalf of the user. This is useful for:

  • applications that rely on a service to security-trim data returned
  • services that are public or expected to have many untrusted clients

In this model, the user authenticates and authorizes the application to act on their behalf; the STS issues an access token upon successful authentication. The application then uses that token for requesting resources. If you’ve ever used an app for Facebook or Twitter, you’ve been through a 3-legged OAuth model.

WCF Service Behaviors + Filters

There are two pieces we need to build – a server-side Service Behavior that inspects + validates the incoming token, and a client-side filter that acquires a token and stuffs it in the Authorization header before the WCF message is sent. We’ve used this pattern on a few projects now – this is a good resource for more details and similar implementations.

We need to do three things:

  • Update the WCF service with a message inspector that will inspect the current message
  • Update the WCF client to request a token and include it in the outgoing message
  • Update the WCF service’s Azure AD application manifest to expose that permission to other Azure AD applications

Service Side

Service side, we want something which can inspect the messages as they come in; this inspector will both grab the token off the request + validate it. This started life from the above blog post, but was modified for the newer identity objects in .NET + WIF 4.5 and for clarity.

Some Code

Looking through here, we’ll find pretty much everything we need to make our WCF service ready to receive and validate tokens. The highlights:

BearerTokenMessageInspector.cs

Here we’re doing the main chunk of work. AfterReceiveRequest is fired after the WCF subsystem receives the request, but before it’s passed on to the service implementation. Sounds like the perfect place for us to do some work. We start by pulling the token out of the Authorization header, finding the federation metadata for that tenant, and validating the token. System.IdentityModel.Tokens.JwtSecurityTokenHandler does the heavy lifting here (it’s a NuGet package), handling the roundtrips to AAD to validate the token based on the configuration. Take note of the TokenValidationParameters object; any mis-configuration here will cause validation to fail.
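The original gist isn’t embedded here, so here’s a minimal sketch of what that inspector looks like, assuming the 4.x System.IdentityModel.Tokens.Jwt package. FederationMetadata.GetSigningTokens() is a hypothetical helper that pulls the signing certs from your tenant’s federationmetadata.xml, and the audience/issuer values are placeholders from config; the real implementation also returns the 401 + WWW-Authenticate response via WcfErrorResponseData rather than throwing.

using System;
using System.IdentityModel.Tokens;
using System.Net;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Web;

public class BearerTokenMessageInspector : IDispatchMessageInspector
{
  public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
  {
    // Pull the Authorization header off the incoming HTTP request.
    var http = (HttpRequestMessageProperty)request.Properties[HttpRequestMessageProperty.Name];
    var authHeader = http.Headers[HttpRequestHeader.Authorization];

    if (string.IsNullOrEmpty(authHeader) || !authHeader.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
      throw new WebFaultException(HttpStatusCode.Unauthorized); // simplified; see WcfErrorResponseData below

    var jwt = authHeader.Substring("Bearer ".Length).Trim();

    // Audience/issuer are placeholders - they must match your Azure AD app and tenant.
    var validationParameters = new TokenValidationParameters
    {
      ValidAudience = "https://yourtenant.onmicrosoft.com/YourWcfService",
      ValidIssuer = "https://sts.windows.net/your-tenant-id/",
      IssuerSigningTokens = FederationMetadata.GetSigningTokens() // hypothetical helper
    };

    SecurityToken validatedToken;
    var principal = new JwtSecurityTokenHandler().ValidateToken(jwt, validationParameters, out validatedToken);

    // Make the caller's claims available to the service implementation.
    System.Threading.Thread.CurrentPrincipal = principal;
    return null;
  }

  public void BeforeSendReply(ref Message reply, object correlationState) { }
}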

BearerTokenServiceBehavior.cs

Next we’ll need to create a service behavior, instructing WCF to apply our new MessageInspector to the MessageInspector collection.  
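A sketch of the behavior – this is the standard IServiceBehavior pattern of walking the channel dispatchers and adding the inspector to every endpoint’s DispatchRuntime:

using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class BearerTokenServiceBehavior : IServiceBehavior
{
  public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
  {
    foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
    {
      foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
      {
        // every request through this endpoint now flows through our inspector
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new BearerTokenMessageInspector());
      }
    }
  }

  public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase,
    Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters) { }

  public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }
}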

BearerTokenExtensionElement.cs

This is a simple class to add the service behavior to an extension that can be controlled via config.
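Roughly, it’s the usual BehaviorExtensionElement boilerplate:

using System;
using System.ServiceModel.Configuration;

public class BearerTokenExtensionElement : BehaviorExtensionElement
{
  public override Type BehaviorType
  {
    get { return typeof(BearerTokenServiceBehavior); }
  }

  protected override object CreateBehavior()
  {
    return new BearerTokenServiceBehavior();
  }
}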

WcfErrorResponseData.cs

This is a helper for returning error data in the result of a broken authentication call. We can return a WWW-Authenticate header here (in the case of a 401), instructing the caller where to retrieve a valid token.  
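The exact shape isn’t shown here; something along these lines would do – a status code plus the headers (like WWW-Authenticate) to write onto the reply:

using System.Collections.Generic;
using System.Net;

public class WcfErrorResponseData
{
  public HttpStatusCode StatusCode { get; private set; }
  public IDictionary<string, string> Headers { get; private set; }

  public WcfErrorResponseData(HttpStatusCode statusCode, IDictionary<string, string> headers = null)
  {
    StatusCode = statusCode;
    Headers = headers ?? new Dictionary<string, string>();
  }
}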

Service Configuration

The last piece is updating the WCF service’s config to enable that message inspector:
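(The embedded config isn’t reproduced here – below is a sketch of what it boils down to. The extension name, namespace and assembly are placeholders for however you registered BearerTokenExtensionElement.)

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="bearerTokenValidation"
           type="YourNamespace.BearerTokenExtensionElement, YourAssembly" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- switches on the message inspector for every endpoint of this service -->
        <bearerTokenValidation />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>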

Client Side

Now that our service is set up to both find and validate tokens, we need our clients to acquire and send those tokens in the headers. This is much simpler, thanks to ADAL – getting a token is about a five-line operation.

AuthorizationHeaderMessageInspector.cs

The AuthorizationHeaderMessageInspector runs on a client and handles two things – acquiring the token and putting it in the proper header.
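A minimal sketch – AzureAdToken is the helper described next, and the header is the standard Authorization header:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class AuthorizationHeaderMessageInspector : IClientMessageInspector
{
  public object BeforeSendRequest(ref Message request, IClientChannel channel)
  {
    // Make sure there's an HTTP property to hang headers on, then add the bearer token.
    object property;
    HttpRequestMessageProperty http;
    if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out property))
    {
      http = (HttpRequestMessageProperty)property;
    }
    else
    {
      http = new HttpRequestMessageProperty();
      request.Properties.Add(HttpRequestMessageProperty.Name, http);
    }

    http.Headers["Authorization"] = "Bearer " + AzureAdToken.GetAccessToken();
    return null;
  }

  public void AfterReceiveReply(ref Message reply, object correlationState) { }
}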

AzureAdToken.cs

This is a simple helper for acquiring the token using ADAL. You can modify this to pop a browser window and get user tokens, or using this code, it’s completely headless and an application-only token. ADAL also handles caching the tokens, so no need to fret about calling this on every request.
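Something like this, assuming ADAL v2’s synchronous client-credential overload – the authority, resource URI and client ID/key are placeholders for your own tenant, the service’s App ID URI and the client app’s credentials:

using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class AzureAdToken
{
  private const string Authority = "https://login.windows.net/yourtenant.onmicrosoft.com";
  private const string ServiceResourceUri = "https://yourtenant.onmicrosoft.com/YourWcfService"; // the service's App ID URI
  private const string ClientId = "your-client-app-id";
  private const string ClientKey = "your-client-app-key";

  public static string GetAccessToken()
  {
    // ADAL caches tokens in-process, so calling this per request is cheap.
    var authContext = new AuthenticationContext(Authority);
    var credential = new ClientCredential(ClientId, ClientKey);
    var result = authContext.AcquireToken(ServiceResourceUri, credential);
    return result.AccessToken;
  }
}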

AuthorizationHeaderEndpointBehavior.cs

A wrapper to add the AuthorizationHeaderMessageInspector to your outgoing messages.
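A sketch of that wrapper – nothing but the IEndpointBehavior plumbing needed to attach the client message inspector:

using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class AuthorizationHeaderEndpointBehavior : IEndpointBehavior
{
  public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
  {
    clientRuntime.MessageInspectors.Add(new AuthorizationHeaderMessageInspector());
  }

  public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
  public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
  public void Validate(ServiceEndpoint endpoint) { }
}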

EndpointExtension.cs

A simple extension method for adding the endpoint behavior to the service client.
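Likely just a one-liner along these lines:

using System.ServiceModel.Description;

public static class EndpointExtension
{
  public static void AddAuthorizationEndpointBehavior(this ServiceEndpoint endpoint)
  {
    endpoint.Behaviors.Add(new AuthorizationHeaderEndpointBehavior());
  }
}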

Usage

Wrapping it all together, here’s what we’ve got – a simple call to ServiceClient.Endpoint.AddAuthorizationEndpointBehavior() and our client is configured with a token. Your outgoing call should include the header, which the service will consume and validate, sending you back some data. Easy, right?!
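For illustration, with a hypothetical generated client called MyServiceClient:

var client = new MyServiceClient();
client.Endpoint.AddAuthorizationEndpointBehavior();

// every call now goes out with an Authorization: Bearer <token> header,
// which the service-side inspector validates against Azure AD
var data = client.GetSomeData();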

Configuring Azure AD

The last thing we need to do is configure Azure AD with our applications. Those client IDs and secrets aren’t just going to create themselves, eh? I’m hopeful if you’ve made it this far that adding a new application to Azure AD isn’t taxing your mental resources, so I won’t get into how to create the applications. Once they’re created, we need to do two things – expose the permission and grant that to our client. Let’s go.

App Manifest

The app manifest is the master record of your application’s configuration. You can access it via the portal, using ‘Manage Manifest’ in the menu of your app:

Download your manifest and check it out. It’s likely pretty simple. We want to add a chunk to the oauth2Permissions block, then upload it back into the portal:
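(The exact chunk isn’t embedded here, but it’s roughly the following – generate a fresh GUID for the id, and pick whatever value/display names make sense for your service.)

"oauth2Permissions": [
  {
    "adminConsentDescription": "Allow the application to call the WCF service.",
    "adminConsentDisplayName": "Access the WCF service",
    "id": "NEWLY-GENERATED-GUID",
    "isEnabled": true,
    "type": "User",
    "userConsentDescription": "Allow the application to call the WCF service on your behalf.",
    "userConsentDisplayName": "Access the WCF service",
    "value": "user_impersonation"
  }
]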

What’s this doing, exactly? It’s allowing us to expose a specific permission to Azure AD, so we can grant that permission to other Azure AD applications. Head over to your client application’s Azure AD app record. Near the bottom of the ‘Configure’ section, we’ll see ‘Permissions to other applications’ – let’s find our service in this list. Once you’ve found it, you can grant specific permissions. Extrapolate this further, and you can see there’s certainly room for improvement. Perhaps other permission sets and permissions are available within our app? They can be exposed and granted here.

Ed note: It’s finally out of preview!

Wrap

What you’ve seen is a ready-to-go example of using Azure AD to authenticate your applications. We’ll dig into using user tokens at both the application and service levels in a later post, but in the meantime, you’ve now got a way that’s better than shared credentials or *gasp* no authentication on your services.

Consolidating Services for Maximum Efficiency

Every day we’re bombarded with vendors, providers and *ahem* consultants telling us we need to break up our apps for maximum scalability & availability in the cloud. This is true – one of the keys to maximizing efficiency is breaking your applications down into units of work that can be scaled independently. This comes at a cost, however – imagine your Azure cloud project is made up of a dozen web services spread out over a dozen web roles. That gets pretty expensive pretty quickly, especially if you’re targeting the SLA, which requires at least two instances per role – 24 instances for a dozen services.

Let’s say you’re migrating a few LOB apps to the cloud – does each of these need its own scalability unit? Perhaps they work in concert together, or perhaps no single application taxes the underlying servers more than a few percentage points at a time. Is this really the most efficient use of resources?

Breaking your application into smaller units on expected scalability boundaries is a best practice, without a doubt – but does that require that each service live within its own instance all the time? Let’s revisit our guidance and turn it into something more palatable and more explicit. We’ll look at two examples, reusing queue workers and stacking multiple IIS applications on a web role.

We’ll touch on two cloud pattern implementations – competing consumers + service consolidation.

Application Design vs. Deployment Design

We should always write and design our services in discrete scalability units – but how they are deployed is a deployment question, not a design question. If we write expecting each of these units to be in its own container (e.g., stateless, multi-instance), what hosts the code (and what else the host is hosting) becomes irrelevant until our scalability requirements dictate we move those units to individual hosts.

Multiple Personalities

In a complex application, reliable messaging is a must, especially as we start to break our application into multiple discrete services. Reliable messaging through queues is a standard pattern, but how do we design our queues and workers? Are they one-to-one between queue/message and worker implementation? Perhaps they are when we roll to production or ramp beyond pilot, but this is the cloud…why are we deciding this now?

Let’s start with a simple application – this application uses two queues, one for regular messages and one for exception messages. Each queue has queue workers that are dedicated to performing a task:

  1. for regular messages, the message is persisted to storage and a service is called.
  2. for exception messages, the exception is parsed and certain types of exception messages are transformed to regular messages and dropped back onto the regular queue.

How is our Azure solution arranged to accomplish this?

  1. Storage Account
    1. Regular Q
    2. Exception Q
  2. Cloud Service
    1. Regular worker
    2. Exception worker

They seem awfully similar, yes? Since we’re writing this code in its entirety, what’s to keep us from having a queue worker with multiple personalities?

Here’s our code today:

public interface IQueueManager
{
  object Read();
  object Peek();
  void Delete(object message);
}

public class MainQueueManager : IQueueManager
{
  //implementations
}

public class ExceptionQueueManager : IQueueManager
{
  //implementations
}

And the worker role’s Run() method. The ExceptionWorkerRole’s code would be remarkably similar, but in a separate role (thus incurring additional cost).

public class Worker
{
  public void Run()
  {
    while (true)
    {
      var queueManager = new MainQueueManager();
      var message = queueManager.Peek();
      var mainQueueProcessor = new MainQueueProcessor();
      mainQueueProcessor.Process(message);
      queueManager.Delete(message); 
      Thread.Sleep(1000);
    }
  }
}

This implementation is fine, but now we’re stuck with a single-function queue worker – it reads a single queue and processes that message a single way. There are two specific behaviors we’re concerned with – the queue which gets read, and the actions that happen depending on the specific message. If each of these is abstracted appropriately, our worker role can do whatever is appropriate under current load. This could be as simple as a condition in your worker role’s Run() method that checks all known queues, the type of message, then invokes one of a variety of implementations of an interface. Sound like a lot? It’s not. Let’s dive in.

Looking at all Queues

We’ll start with which queue to read from – we want to read from all queues that we get from configuration (or through DI, or however you choose to get them), so let’s abstract that away a bit:

public class Worker
{
  private IEnumerable<IQueueManager> queues;
  public void Run()
  {
    //get queues - create queue managers from config, pass from constructor, etc.
    queues = new List<IQueueManager>();
    while (true)
    {
      foreach (var q in queues)
      {
        DoQueueThings(q);
      }
      Thread.Sleep(1000);
    }
  }
}

All we’ve done here is allow a single worker to look into multiple queues – without changing any of our core queue management or processing code. The code that handles reading/peeking/deleting from the queue has all stayed the same, but our code can now run in a shared host with other queue readers. On to the queue processor.

Processing Different Messages

Now we need to do something depending on which message we get. This could be based on whatever criteria fits the need, for instance, perhaps the message data can be deserialized into a type with a specific flag. In our simple case, we’ll just look at the message data and check for a string.

private void DoQueueThings(IQueueManager queueManager)
{
  var message = queueManager.Peek();
  if (message.ToString() == "uh oh")
  {
    var eqp = new ExceptionQueueProcessor();
    eqp.Process(message);
  }
  else
  {
    var mqp = new MainQueueProcessor();
    mqp.Process(message);
  }
  queueManager.Delete(message);
}

Considerations

This is a very simple consolidation pattern, which gives each worker instance the ability to perform more than one function – excellent for controlling cost and management overhead. There are times this isn’t appropriate – make sure functionality is grouped with similar tasks – e.g., if one task requires high CPU, it may not be appropriate to scale it with something processing a high volume of low-CPU tasks.

We’re also implementing a simple competing consumers pattern (with a queue, not through our service consolidation), where some number of instances greater than one is capable of reading off of a single queue. This may not be appropriate when operations are not idempotent (i.e., the function or task cannot be repeated without side effects – iterators are a classic example of this), or where the order of messages is important.

Stacking Sites in Web Roles

Previously we were looking at code changes we can make in our worker roles that can optimize efficiency – but these are all code changes. Next we’ll tackle IIS – this is all configuration, no code changes required.

Anyone familiar with running IIS knows you can run multiple web sites within a single IIS deployment – not only can multiple sites run on IIS, multiple sites on the same port can run using host headers. Azure provides you the same capability – through ServiceDefinition.csdef. It’s not immediately obvious how to accomplish this in Visual Studio, but it’s quite easy once you’re comfortable with how Azure services are configured. There are two things we need to handle – one is the actual configuration of the web role, the other is making sure all sites are built and copied appropriately during packaging.

Solution

Our solution configuration is pretty simple – an Azure cloud service with a single web role and two MVC web apps. We’ve also got a single HTTP endpoint for our web role on port 80.

ServiceDefinition.csdef

We’ll start in ServiceDefinition.csdef – here we’ll essentially configure IIS. We can add multiple sites, virtual directories, etc – for our purposes, we need to add an additional site. The ServiceDefinition.csdef probably looks a bit like this currently:

<WebRole name="AppWebRole" vmsize="Small">
  <Sites>
    <Site name="Web">
      <Bindings>
        <Binding name="Endpoint1" endpointName="Endpoint1" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  </Endpoints>
</WebRole>

Pretty straightforward. Now we need to let the Azure fabric know that we’re hosting multiple sites within this web role. You’ll note there’s a ‘sites’ collection – here we’ll add our additional sites (I’ve changed the endpoint names to make them more readable). Let’s take a quick look at what’s been done:

  1. First – we’ve added the physicalDirectory attribute to the Site tag. This is important and we’ll dig into it in a moment.
  2. The bindings have been updated to add the appropriate host header. In this example, we want our main site to receive all traffic, so we’re using *
  3. The second site should only respond to specific traffic, in this case, app2.jpd.ms.
<WebRole name="AppWebRole" vmsize="Small">
  <Sites>
    <Site name="Web" physicalDirectory="..\..\apps\AppWebRole">
      <Bindings>
        <Binding name="App1Endpoint" endpointName="HttpEndpoint" hostHeader="*" />
      </Bindings>
    </Site>
    <Site name="Web2" physicalDirectory="..\..\apps\WebApplication1">
      <Bindings>
        <Binding name="App2Endpoint" endpointName="HttpEndpoint" hostHeader="app2.jpd.ms"/>
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
  </Endpoints>
</WebRole>

Packaging

Now that our service is all configured, we need to get our files in the right place. By default, the packager will package the other projects in their entirety as part of the package. This is bad for a lot of reasons, but it also removes some functionality – for instance, web.config transforms for projects outside the main project (the one associated with the web role) won’t happen, because msbuild is never called for that project.

There are multiple ways to accomplish this floating around the internet – some suggest updating the Azure ccproj to add additional build events as part of the msbuild script. I personally have used post-build events to locally publish to a subdirectory for each web project, something like this:

%WinDir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe "$(ProjectPath)" /T:PipelinePreDeployCopyAllFilesToOneFolder /P:AutoParameterizationWebConfigConnectionStrings=false /P:Configuration=$(ConfigurationName);PreBuildEvent="";PostBuildEvent="";PackageAsSingleFile=false;_PackageTempDir="$(ProjectDir)..\CLOUD_PROJECT\apps\$(ProjectName)"

Make sure to change CLOUD_PROJECT to the name of your cloud project. Dropping this into your post-build event for each web project will build your projects, copy the output to the target folder (matching the project name, this could be changed) before CSPack builds the Azure package.

As we continue to see businesses take their first steps into the cloud, the service consolidation pattern is guaranteed to be a common sight – on-premises IIS servers stacked to the gills with individual web sites are a common pattern, especially for dev/test and low-priority LOB apps. I’m not advocating inappropriate reuse of existing service hosts, but maximizing the efficiency of the ones you have can greatly ease your first move to the cloud.

Updating ADFS 3 for WIA on Windows Tech Preview

If you’re using the Windows Technical Preview, you may notice that ADFS presents you with a forms login instead of using WIA from IE on a domain machine. This little chunk of PowerShell covers most of the major browsers that support WIA – you can plunk it into your ADFS server and get it going:

Set-AdfsProperties -WIASupportedUserAgents @("MSIE 6.0", "MSIE 7.0; Windows NT", "MSIE 8.0", "MSIE 9.0", "MSIE 10.0; Windows NT 6", "Windows NT 6.4; Trident/7.0", "Windows NT 6.4; Win64; x64; Trident/7.0", "Windows NT 6.4; WOW64; Trident/7.0", "Windows NT 6.3; Trident/7.0", "Windows NT 6.3; Win64; x64; Trident/7.0", "Windows NT 6.3; WOW64; Trident/7.0", "Windows NT 6.2; Trident/7.0", "Windows NT 6.2; Win64; x64; Trident/7.0", "Windows NT 6.2; WOW64; Trident/7.0", "Windows NT 6.1; Trident/7.0", "Windows NT 6.1; Win64; x64; Trident/7.0", "Windows NT 6.1; WOW64; Trident/7.0", "MSIPC", "Windows Rights Management Client")

Why?

In version 3, ADFS tries to intelligently present a user experience that’s appropriate for the device. Browsers that support WIA (like IE) provide silent sign on, while others (like Chrome, Firefox, mobile browsers, etc) are presented with a much more attractive and user friendly forms-based login. This is all automatically handled now, unlike before where users with non-WIA devices were prompted with an ugly and potentially dangerous basic 401 authentication box (if they were prompted at all).

This means you can now design a login page for non WIA devices that might include your logo, some disclaimers or legal text.


iOS 8.x and ADFS 3

TenantDbContext for Table Storage

If you’ve used the ASP.NET MVC templates with multi-organizational authentication, you’ll inevitably end up with a bunch of generated Entity Framework goo for keeping track of organizational certificate thumbprints for orgs who have logged into your app. This is lame. We’re creating two tables with a single column each in SQL?! I’ve never heard of a better use of table storage. Not to mention I now have to pay for a SQL Azure instance, even if my app doesn’t need it.

This speaks to a larger issue – how frequently are we, as developers, using SQL by default? Do we really need relational data? Are we enforcing constraints in our service layer as we should? We are?! This makes SQL even more ridiculous in this scenario.

I decided to build one that uses table storage. You’ll need a few things:

a) the source

b) update your web.config to indicate the issuer registry type
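For reference, the web.config change amounts to swapping the issuerNameRegistry type that the template generated for the one in the repo – the type/assembly name below is a placeholder, so use whatever the class in the source is actually called:

<system.identityModel>
  <identityConfiguration>
    <!-- replaces the EF-backed registry generated by the MVC template -->
    <issuerNameRegistry type="TableStorageIssuerRegistry.TableStorageIssuerNameRegistry, TableStorageIssuerRegistry" />
  </identityConfiguration>
</system.identityModel>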

The VS solution is on github here: https://github.com/johndandison/net-table-issuer-registry

It’s dependent upon Azure configuration and Azure storage. It’s licensed under MIT – if you find it useful, I’d just ask that you drop me a line and let me know what neat thing you’re working on!

Smartphones Don’t Seem Very Smart Anymore

That headline may come across as rather spoiled, a la Louis CK’s always entertaining rant against people complaining about wifi on airplanes (you’re in a chair. In the air). The recent release of the giant iPhone got me reconsidering what I want in a phone, and it made me realize that Microsoft’s mobile offering is brilliantly ahead of its time, while its recent pivot is incredibly disappointing.

Rewind

I’ve flown the Windows Phone banner since 2011, when I first got a Samsung Focus. I had an iPhone 4 for work and while it was fine, the freshness of WP was irresistible. I still have that Focus, actually. Sure it was limited at first – not many apps, some rather glaring omissions (copy/paste, anyone?). But the live tiles were excellent, the integration with other Microsoft services was top notch and the hubs were excellent. In fact, the hubs were what sold me on Windows Phone. All of my social stuff in one interface? Music? Pictures? Each one had its own hub, dedicated to the function it was supposed to do. I even bought my wife one, an HTC HD7, for Valentine’s Day, plus a Zune Pass. It was great. Time went on, and more phones were released, some of which I bought, like the bright blue Lumia 900. That thing was a beast. Carolina Panthers blue, amazing screen and the fastest WP money could buy.


How slick is that? Great design, amazing to hold and looked fantastic.

Fast forward to now, I’ve gone through a few more, Lumia 920, 925, 1020 and (my current phone), a 1520, each one better than the one before.

The release of WP8 promised even more awesome stuff. We saw new tile sizes, BitLocker, IE10, SD card support – a slew of new stuff. Things were looking up.

Enter WP 8.1

The future. Credit: virtuaniz.com

WP 8.1 ushered in a new era of ‘completeness’ for WP. IE11 is onboard, including much broader HTML5 compatibility, plus support for all kinds of new sizes, internals and some pretty slick interface tweaks. Not to mention the Notification Center, and of course, Cortana.

But there’s another change WP8.1 brought in that feels like a massive step backward and plays to Windows Phone’s only significant weakness – third-party app support.

I know app counts and the like on WP are a running joke, but there’s a reason I bring this up – most of the hubs have been abandoned in favor of apps. Take the people hub, which I used to spend the majority of my phone time in – it’s now severely crippled by requiring an external app to source the info, as well as requiring that app to interact with the source that surfaced it (e.g., Facebook, Twitter, etc). The Me tile, which used to contain social notifications, is now just a shell of what it was before – nothing useful at all, just a picture of myself and an option to check in. What does check in do? Open an app. sigh. I’ll come back to this later.

App Gap?

I picked up an iPad a week ago to do some Azure testing. I haven’t used an iOS device since the iPhone 4 in 2011, so I was curious to see how things had improved. I’ve got a Nexus 7 and a Nexus 5, which I used mostly for testing apps and to give my Google Glass a mobile connection (I keep the Nexus 5 in my bag). I’ve never been a huge fan of Android – I understand lots of people like it, but it’s just not for me.

But I missed the tiles – the home screen is still just a bunch of icons. No passing data, nothing – just icons. Android is the same way.

Anyway, so I’ve got this new iPad, and it’s pretty slick – but the quality and speed of the applications was immediately apparent. And I immediately got sad. Seeing the gap between the quality of iOS apps v. Android apps brought me to a pretty terrible conclusion:

If Android is this far behind iOS, Windows Phone is…never going to see quality apps.

Thus my sadness. As much as I love the platform, it just can’t compete when it comes to apps. How can I, as an indie dev (I don’t do any mobile at my current gig, at least not right now), get developers to give a shit about the platform? Facebook doesn’t care, Twitter doesn’t care – and judging by the quality of what’s in the stores today, developers don’t care either. It’s either a half-ass attempt at porting an existing app or it’s someone’s less-than-brilliant interpretation of Microsoft’s modern design language. In short, the vast majority of apps that exist on the platform are shit.

Function-centric vs App-centric

And herein lies the problem. WP can’t compete with apps. This is not opinion, it’s fact. Look at the ratings in the Windows Store(s) – they’re atrocious. Apps are consistently non-performant, released once and never updated, don’t work or are just generally of poor quality. Lots of major services don’t even produce apps – and if they do, they are perpetually in beta (looking at you, Instagram), or Microsoft builds them themselves (e.g., Facebook). Sure, there are third-party developers who do amazing things, but they are few and far between (note – someone needs to make a Readit-style reddit app for iOS – Readit is easily the best of breed right now).

Go do a quick web search for ‘Xbox Music WP8.1’ – I’ll wait. Back? It’s definitely another casualty of the ‘let’s make everything an app’ decision. The reviews are atrocious, and even now, with a dedicated team + about a dozen releases, it’s still nothing like what it used to be.

Context Awareness is the ‘next big thing’™

But Windows Phone’s strength was always in context-awareness. The hubs focused on what you wanted to do, and surfaced relevant data and actions. The tiles, when pinned, were updated with relevant information based on what you wanted to do – want to pin a specific stock in your portfolio? Cool, pin away. Chat was unified between SMS, Windows Live, Facebook – seems familiar, eh Google Hangouts? WP had this in 2011.

Me tile notifications centered all of your social updates (retweets, wall posts, linkedin (lol) interactions) into a single feed. “I want to see what’s happening with my network,” you’d say, and the Me tile & people hub delivered.

Cortana takes context sensitivity to the next level:

Remind me next time I talk to my wife to ask her about the company picnic.

Next time I talked to my wife, I got a popup reminding me what I asked to be reminded about.

Next time I’m at Lowe’s, remind me to pick up 8 G2 halogen bulbs.

Upon arriving in the Lowe’s parking lot, I’m reminded (from the previous weekend, no less) to get my bulbs.

Even better, Cortana can remind me about flights, news, topics, weather, traffic, all kinds of things, based on my behavior and implicit/explicit metadata. Some things I told her about, others she gleaned from email, searches, messages, etc.

But Microsoft’s desperation to appease the masses and move WP has resulted in the ‘app for everything’ decision. It certainly has its merits – faster updates, more ‘xxx,000 apps in our store!’ ads, etc. – but I think it weakens the core strength of what WP is all about.

It’s an app centric world…for now

So again, we’ve ended up in the prickly spot where Microsoft’s released something brilliant, but it’s the wrong time. iOS and Android are app launchers – there’s nothing inherently ‘smart’ about the OS. Sure, more things are starting to poke in, but for the most part, the ‘innovation’ is all left up to third-party developers. I don’t want to be in and out of apps all day long. I want to see what’s relevant at a specific time, or in a certain place, or…

And people don’t care about context. In emerging markets, just the fact that there’s a thing in your hand connected to the internet and it didn’t cost a fortune is a miracle in itself. Think those people are going to bitch about app quality or availability?

First-world consumers don’t appear to care about context either, at least right now. Hopefully, this changes, but Microsoft’s got to focus their message. It’s OK that WP doesn’t have 1 million apps.

Microsoft – reintegrate with major players. Bring back and modularize the hubs to allow third party access (or at least make it easier for you to maintain them) without all of the pain of these terrible third-party apps.

Integration and context are the next big winners. Opening 46 different apps to do your work daily will get tiresome. We don’t need apps for every website on the planet. We need focused, relevant information without the noise.

Unfortunately, I don’t know where the platform goes from here. Nadella’s clearly focusing mobile dev where the money is, as the Microsoft apps on other platforms are very good (seriously, almost every service I use has an outstanding app on iOS), while the WP equivalents are hit or miss (see Skype).

I swapped my 1520 for a friend’s 5s for a week while I consider a big-ass iPhone 6. I have no idea what I’ll end up using daily, but I’ll say this. This damn 5s is tiny. I have no idea how people use this phone daily.

Headless Azure AD User Creation

If you’ve spent any time with the Azure Graph API, it’s pretty sweet. Federated identity for the masses, with almost zero drama. Up until now I was mostly doing logins, queries, etc. with Azure AD, but for my latest project, I need to create both new domains and new users in those domains. I haven’t tackled creating new domains yet, because that looks like it’s going to be a royal PITA (automating PowerShell? ick) – but I headed down the user path today. Went pretty well, until I got stopped cold adding a user.

Here’s some code

Adding a user with ADALv2 + the Active Directory Graph Client is pretty easy. Both are NuGet packages and simplify the process considerably. You can also post the JSON yourself, which you can find here on MSDN.

But I’m using ADGC, so here’s a quick snip of the required fields you’ll need to get a user created:


var gc = new GraphConnection(accessToken); //get this below
var pp = new PasswordProfile() //required
{
  ForceChangePasswordNextLogin = true,
  Password = "Watermelon1!"
};
var u = new User
{
  DisplayName = displayName,
  UserPrincipalName = upn,
  PasswordProfile = pp,
  MailNickname = displayName.Replace(" ", string.Empty),
  UsageLocation = "US",
  AccountEnabled = true,
  ImmutableId = Guid.NewGuid().ToString()
};
try
{
  var p = gc.Add(u);
  Console.WriteLine("Created {0}, immutable ID: {1}", p.UserPrincipalName, p.ImmutableId);
}
catch (GraphException ex)
{
  Console.ForegroundColor = ConsoleColor.Red;
  Console.WriteLine("{0}: {1}", ex.ErrorMessage, ex.ErrorResponse.Error.Message);
}

Pretty straightforward…until you get to

gc.Add(u);

…where chances are you’ll blow up with a 403 Forbidden. In fact, chances are high – like 100% – that this will happen (if it doesn’t, let me know).

Graph Read/Write

For whatever reason, and I’m still trying to figure out exactly why, the ‘Read and write directory data’ permission doesn’t appear to allow adding users. I’m assuming this is because they want a user who’s in one of the principal management roles, like User Administrator (see this post for some info on that), as opposed to allowing app principals to do this. The long and short is that the Graph API wants you to go through an OAuth browser flow to delegate a token from a user with the appropriate permissions. If you’re using ADALv2, there’s no AcquireToken overload that’ll do this. This is fine, unless you want to automate the creation of these users.

What are you to do?

Fortunately, we can use the OAuth password grant_type to request a token with only a user’s username & password.

AccessTokens & the Graph

You’ll need a few things to get set up. I’m not going to go into much detail here, because if you’re encountering this issue chances are you’re already well set up. We need to request a token from the AAD STS, including both the user’s username/password and the client ID and secret of the app you’re developing. Here’s a sample:

var reqUri = "https://login.windows.net/YOUR TENANT ID OR NAME/oauth2/token";
var postData = "resource=00000002-0000-0000-c000-000000000000&client_id={0}&grant_type=password&username=john%40mytenant.onmicrosoft.com&password=nicetry&scope=openid&client_secret={1}";
var wc = new WebClient();
wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
var response = wc.UploadString(reqUri, "POST", string.Format(postData, AppId, EncodedKey));
var tokenData = JObject.Parse(response);
return tokenData["access_token"].Value<string>();

Let’s deconstruct this request a bit, shall we?

https://login.windows.net/YOUR TENANT ID OR NAME/oauth2/token

This is where we’re posting our token request

PostData

The main chunk of our request.

  • resource – the resource you’re trying to access. In this case, it’s the Graph, which always has this ID: 00000002-0000-0000-c000-000000000000
  • client_id – your app’s client ID
  • client_secret – important – make sure to URL encode your key before putting it here
  • grant_type – password
  • username – your UPN (e.g., blah@tenant.onmicrosoft.com)
  • password – this should be obvious
  • scope – get data in the OpenID Connect format: openid

And make sure you URL encode the form/values before submitting, otherwise it’s 400s for you.

And that’s it

Provided you got a token back and didn’t have any problems with the request, you should be able to tack that access token into the header

Authorization: Bearer ...access token...

or you can stuff that into

new GraphConnection(accessToken)

if you’re using the Graph Client wrapper.

Create away! You’re off.

