jpd.ms

on-prem = legacy

Azure Cloud Service Endpoint ACLs

Recently, Azure VMs got endpoint ACLs – this is a great addition and one of the biggest things I missed from AWS’ security groups. Using them on VMs is great and all, but what about cloud services? Since VMs are instances within a cloud service, it’s certainly possible, but how can we configure them as such? Fortunately it’s pretty easy.

No soup

First you’ll need to snag Azure SDK v2.3 and make sure your ServiceConfiguration.<env>.cscfg is at the latest schema (as of today, that’s 2014-01.2.3).

Head on in to ServiceConfiguration.Cloud.cscfg – these restrictions are obviously cloud-only – and add your chunk of config. IntelliSense should pick this up and make it much simpler.

What’s nice is you can define your rules in total under AccessControls, then assign them as you need them to specific endpoints.

Here’s a sample allowing a few single IPs + a range and denying everyone else. Rules are evaluated in ascending order, so pay attention to the order attribute.

<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="DenyAllExceptDevelopment">
      <Rule action="permit" description="stuff" order="100" remoteSubnet="198.51.100.194/32" />
      <Rule action="permit" description="thing" order="101" remoteSubnet="192.0.2.167/32" />
      <Rule action="permit" description="biz" order="106" remoteSubnet="203.0.113.0/24" />
      <Rule action="deny" description="theinternet" order="200" remoteSubnet="0.0.0.0/0" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="AzureService.Thing.Stuff" endPoint="Endpoint1" accessControl="DenyAllExceptDevelopment" />
  </EndpointAcls>
</NetworkConfiguration>

Denying Access through ADFS + Yammer

Start here if you haven’t already.

We’ll start with the last example – I’m piloting Yammer, I’ve got some users I want to grant access, but a whole lot more I want to deny. In the case of Yammer and likely some other RPs who don’t understand the Permit/Deny claim, you’ll have to manipulate something else to force the RP to boot you out. In Yammer’s case, they use the email address as the SAML_SUBJECT, which makes them pretty easy to poke.

Really, you should just update to ADFS 3.

But since that’s easier said than done, here’s how to make Yammer deny access to people using ADFS claims transformation rules.

Recursion? Did you mean recursion?

You’ll note from the previous post that we were denying users based on extensionAttribute1. This isn’t going to work any more, since Yammer doesn’t process the Deny claim, and punishes your insolence by stuffing you into an infinite redirect loop. The first thing you’ll want to do is remove any Issuance Authorization policies you have and put back ‘Allow all users.’

Next, we need to break users whose extensionAttribute1 doesn’t equal true.

Persona Non Grata

In the case of Yammer, it’s easiest to just send in an invalid email address. Not invalid as in syntactically incorrect, but invalid for your organization.

That’ll give users a proper error message, informing them of their denial.

Two rules should do the trick (and you could probably get it down to a single composite rule) – one to transfer the email address to the SAML_SUBJECT and one to overwrite that claim if the user doesn’t have the requisite attributes.

To overwrite the claim (as opposed to adding a second value to the same claim), your issue statement should include the same Issuer, OriginalIssuer and ValueType as the existing SAML_SUBJECT claim.

Something like this:

EXISTS(emailClaim:[Type == "http://schemas.microsoft.com.../emailAddress"]) && NOT EXISTS(c:[Type == "http://schemas.jpd.ms/unique/ad/Authorized", Value == "true"])
=> issue(Type = "SAML_SUBJECT", Value = "FAKE@domain.com", ValueType = emailClaim.ValueType, Issuer = emailClaim.Issuer, OriginalIssuer = emailClaim.OriginalIssuer);
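For completeness, the first rule – the one that copies the user’s email into SAML_SUBJECT for everyone – might look something like this. This is a sketch; I’m assuming the standard email address claim type here, so adjust to whatever your Yammer RP actually uses:

```
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
=> issue(Type = "SAML_SUBJECT", Value = c.Value, ValueType = c.ValueType, Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer);
```

With that in place, the overwrite rule above clobbers the value for unauthorized users and Yammer rejects them with a proper error.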

Denying Access to ADFS-secured Applications

I’m going to have to make this a two-parter, because some company *ahem* Yammer – doesn’t appear to handle the Deny (http://schemas.microsoft.com/authorization/claims/deny) claim very well. By very well, I mean at all.

Here’s the scenario – you’re piloting an application, likely a cloud-based or service-based application, and it’s using ADFS to authenticate (think Office 365, Yammer, Salesforce, etc). The key word here is pilot – you have some users you want to deny access to.

But let’s back up a second – ADFS is, in my opinion, proper for authentication, but not authorization. ADFS is a means to validate your identity, but not a means to grant access to resources. That’s true in the purest of forms, but when an application doesn’t offer a valid way to deauthorize users, sometimes it’s easier to go to the source.

In the case of Yammer, restricting users is painfully bad – particularly for an enterprise app. Microsoft bought Yammer almost two years ago, so we should hope that things will get better.

Let’s talk claim rules.

Claim rules let you do all kinds of fun stuff – from manipulating claims before they’re sent to the relying parties to even determining if that user is authorized to access that relying party (by not sending any claims, which effectively denies them access).

Unfortunately, claim rules also use some regular expressions, which make my eyes bleed. But no matter, we must press on.

A simple example.

Let’s start simple. I want to toggle access to a specific ADFS application using a value from an AD attribute on a user. For this example, I’ll be denying myself access to the Microsoft Identity platform (Office 365, Azure, MPN, etc – what could go wrong, right?), based on a value in extensionAttribute1 in my AD profile. Of course, if you’re a real pro and have extended your AD schema, you can use your own attribute instead.

Anyway, so I’ve got this going on:

  • Relying Party: Microsoft Office 365 Identity Platform (this is what you set up to federate with O365)
  • AD Attribute: extensionAttribute1, value ‘false’

Everything else is vanilla ADFS.

For this we’re only using Issuance Authorization policies, but this same claim rule syntax is valid for other rule types as well.

Let’s add a new claim.

First thing – let’s create a new claim that we can use to stuff our value in. This isn’t absolutely required, but makes it a lot easier – and hey, maybe you’ll even use that claim value in your relying parties one day!

Here’s mine.


To the relying party!

Now find your relying party. In my case, it’s Office 365. Let’s create some claim rules. Head over to your RP, right-click and edit your rules.

Here’s an important piece – claims are really only in scope within the tab they’re created on – for instance, if you’re in the Issuance Transform Rules tab, any custom rules you create (for instance, to assign an AD attribute’s value to a claim) are only valid within the scope of that tab. There may be some exceptions to that rule, but for the most part that’s how it’s compartmentalized. If you think about what you’re doing here, it makes sense – Issuance policies are different from transformation policies, etc – but I fell into this trap so hopefully you won’t have to.

In my example, I want to deny access based on a user account’s extensionAttribute1 being set to false.

First, we need to get a value into our fresh claim. You’ll need to do a custom rule, but it’s pretty simple:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
=> issue(store = "Active Directory", types = ("http://schemas.jpd.ms/unique/ad/Authorized"), query = ";extensionAttribute1;{0}", param = c.Value);

Let’s dissect that a bit. No regular expressions! There’s a quick win.

We’re going to use AD to populate our new claim (http://schemas.jpd.ms/unique/ad/Authorized) from our extensionAttribute1 AD property. Simple, right?

Since rules are processed in order by the rules engine, this rule needs to come first. Now our subsequent rules can use the value of that claim (Authorized) to make decisions on token issuance.

Here’s my next rule:

c:[Type == "http://schemas.jpd.ms/unique/ad/Authorized", Value =~ "^(?i)true$"]
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "PermitUsersWithClaim");

This one is so easy though that you can ‘cheat’ – just use the Permit or Deny based on Claim Value template - pick your claim (in my case, Authorized), set the value it should be equal to and you’re done.
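If your RP does understand Deny and you’d rather issue it explicitly, the same template can emit the inverse. Something like this sketch (using the deny claim type mentioned below):

```
c:[Type == "http://schemas.jpd.ms/unique/ad/Authorized", Value =~ "^(?i)false$"]
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "DenyUsersWithClaim");
```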

In ADFS 3, you don’t even get sent to the relying party, you just get shut down at ADFS. I suspect this change is to support RPs that don’t know how to process the authorization claim.

Here’s my goofy ADFS login screen when my extensionAttribute1 is set to false:


You can, of course, get crazy with your rules, but remember, you’re going to AD for this – and these rules don’t appear to be terribly ‘optimized’ in the sense that AD queries aren’t batched or anything. If you’re a high-volume identity shop, make sure your farm is well equipped to handle the extra load complex rules can put on your infrastructure. Next up – dealing with RPs that don’t understand Deny.

Still think you can do it better?

My wife’s work laptop is a joke. Although she has no administrative rights, it recently got infected with one of those ransomware-type viruses. I tried to help her out – what I found was pretty awful.

She works for a bank. A bank. Where people keep money.

a) Antivirus? Bah. AVG Free. Pretty sure that’s a gross license violation, not to mention it’s just so incredibly bush league.

b) AVG’s ‘safe search’ set as the homepage and locked to prevent changes. Interestingly enough, these all have a specific client ID attached to the URL. I almost wonder if they are getting search kickbacks…

c) Windows Updates only through WSUS.

WSUS isn’t bad on its own, but when it’s only available on-prem (or through VPN), you leave disconnected workers (like her) out of the patching process. An arguably minor violation, just connect to VPN and away we go.

That is, of course, provided someone is managing your WSUS.

I took these screenshots, undoctored, last night, 6/4/14.

Most recent check: 1/12/14.

Last installed: 8/18/12.


But then it got really awful.

No updates available.


Let’s review - you can’t get updates from anyone except for your employer, but your employer obviously has people managing your systems who aren’t capable. It’s another reason that people, humans, end up getting in the way, be it hubris or ignorance. It’s dangerous, and in all honesty, Microsoft shouldn’t allow this to continue. There really should be some kind of dead man’s switch to allow people to get updates when a WSUS operator has effectively gone dead.

It’s really more of an indicator as to how bad things really are in the corporate landscape. Legions of people who mistakenly believe that the people they’ve hired to manage their infrastructure are somehow more capable than people who run massively scaled services for thousands (or millions) of customers daily. That’s not to say there aren’t bad eggs in the cloud space or diamonds on-prem, but those are exceptions to the rule. IT Managers and execs who see the cloud as a threat to their budget or headcount need to reevaluate – would you trust your personal information with these people? Would you trust your family’s livelihood with these people?

If the answer is no, it’s time to take a second look.

SharePoint Online + IRM + External Users

Since I can’t seem to find anything online regarding external users + IRM secured lists, I decided I should put it up here. In short,

External users using Microsoft Accounts can’t use IRM-secured documents that use an external client (e.g., Foxit).

There are some nuances, however. Some scenarios work, some don’t. I did all of this testing from a fresh, non-domain joined Windows 8.1.1 VM.

Scenario I: IRM Office docs + External Users

This appears to work. I shared an IRM lib with a Microsoft account and got to work. I could open and view the documents (Excel, Word & PowerPoint) in the Office Web Apps and the IRM restrictions persisted.

Scenario II: IRM PDF + External User + Foxit Reader

For managed PDFs, it’s not nearly as straightforward. Managed PDFs require one of two readers, Foxit or NitroPDF. I only tried Foxit, because Nitro wanted money. First, managed PDFs don’t open in the Word Web App (like they used to, hopefully that will come back one day), they require a client.

I tried to open the PDF from SharePoint, which prompted a download & open. Upon opening, Foxit told me I needed the AD RMS connector, which is a free download. Downloaded & installed that, tried again, then I needed the Microsoft Online Sign-In Assistant (MOSSIA) – another download/install. Did that.

The next time I opened Foxit, I was prompted by MOSSIA to sign in. Since the site was shared with my Microsoft account, I tried that. No dice – it just kept on kicking out my credentials. I tried app passwords, different Microsoft accounts, nothing.

I thought, perhaps it’s just broken, let me try the organizational account that belongs to the tenant which owns the SharePoint Online instance. This at least allowed me to login successfully, only to have Foxit kick me back out saying I didn’t have permission.

I killed Foxit and tried again – but now, my login information seemed to have persisted (granted, it’s what MOSSIA is supposed to do), so I was never prompted to login again. Fine, except that I couldn’t test any other accounts. Uninstalling MOSSIA didn’t help, so I’m guessing I need to whack some registry entries or some straggler files that are persisting my login information.

Scenario III/IV: IRM + External User (MSFT or Org) + Office 2013 client

I didn’t test this. It’s on my list, but I haven’t tried yet.

Scenario V: IRM PDF + External Organizational User + Foxit PDF

Also haven’t tried this yet. I’ll be really curious, but since the client I’m designing this for isn’t going to have external users with org accounts, it fell off the priority stack today.

A pseudo-solution

Since my specific parameters are IRM, PDF & External Microsoft account users, I’m left in a bind – there’s not a good story here. My parameters also are that the documents are read-only, so I found that if I convert the PDF to a Word doc and upload to the IRM-protected library, I can see it through the Word web app. That may not work for you, but it’s something to consider. It’s possible you could convert to some sort of an image as well, depending on your situation.

 

Azure Storage Queue names + 400s

Keep ‘em lowercase. They’re DNS names, so while they should be case-insensitive, they are, in fact, not. So if you’re getting 400s creating queues (since Bad Request is so helpful and all) – make sure all of your queue names are lowercase.

Here are all of the queue naming rules, from http://msdn.microsoft.com/en-us/library/dd179349.aspx:

Every queue within an account must have a unique name. The queue name must be a valid DNS name, and cannot be changed once created. Queue names must conform to the following rules:

A queue name must start with a letter or number, and can only contain letters, numbers, and the dash (-) character.

The first and last letters in the queue name must be alphanumeric. The dash (-) character cannot be the first or last character. Consecutive dash characters are not permitted in the queue name.

All letters in a queue name must be lowercase.

A queue name must be from 3 through 63 characters long.
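Those rules are easy to check up front before you ever hit the service. A quick sketch – this is my own helper, not part of any Azure SDK:

```python
import re

def is_valid_queue_name(name: str) -> bool:
    # 3-63 characters total
    if not 3 <= len(name) <= 63:
        return False
    # lowercase letters/digits only, dashes allowed but never doubled,
    # and the name must start and end with a letter or digit
    return re.fullmatch(r"[a-z0-9]+(?:-[a-z0-9]+)*", name) is not None
```

Run your names through something like that and you’ll trade the unhelpful 400 for an obvious failure at dev time.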

I migrated my blog this weekend

Needless to say, you’ve made it. I decided to move both johndandison.com/blog & wtfsharepoint.com here and to consolidate the content. There’s still some stuff lagging behind, but I think for the most part my links are properly sending 301s so hopefully it won’t be too bad. I decided to use BlogEngine.net a long time ago, thinking I was familiar with .net and might extend it one day. Three years later and all I did was change the theme…

So I’m off to wordpress now. It’s been pretty nice, you’ll notice none of my old posts are categorized, but hopefully I’ll get around to it.

My 301s are all coming from an Azure app, which has taken over duties for johndandison.com. I’ll post that (relatively awful, but simple) code up to github later, I suppose, if there’s interest. It’s a simple lookup in table storage, old post –> new post, do a 301, then update statistics. At $9/mo I’ll probably just leave that app up for a bit until Google & Bing have managed to reindex me.
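The redirect logic itself is trivial. Here’s a minimal sketch of the lookup-then-301 flow, with a plain dict standing in for the table storage lookup – names are mine, not the actual app’s:

```python
def resolve_redirect(path, mapping):
    """Look up an old URL path; return (status, location)."""
    target = mapping.get(path)
    # 301 Moved Permanently if we know the new home, 404 otherwise
    return (301, target) if target else (404, None)

OLD_TO_NEW = {"/blog/old-post": "https://jpd.ms/new-post"}
```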

There were two things that I didn’t expect, one has been solved and one hasn’t.

400 Bad Request using a URL as a RowKey

This was surprising but at the same time not.

I definitely wanted to use the Source URL as the RowKey. Since I could do direct lookups in table storage (PartitionKey + RowKey), this would be the fastest.

A URL is full of all kinds of stuff a REST service wouldn’t want pumped into it – so I decided to base64 encode my URLs, which would produce me a nice chunk of valid text I could stuff into the RowKey. Or so I thought.

Occasionally, base64 strings include a slash (‘/’), which is definitely not valid for a RowKey. Fortunately a quick answer emerged from someone else with the same problem – base64 encode, swap out the / for _ and do the reverse when pulling it back out. Brilliant!
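The encode/swap/reverse dance looks something like this (a sketch in Python for illustration; the real app is .NET, and the helper names are mine):

```python
import base64

def url_to_rowkey(url: str) -> str:
    # standard base64, then swap '/' (invalid in a RowKey) for '_'
    return base64.b64encode(url.encode("utf-8")).decode("ascii").replace("/", "_")

def rowkey_to_url(rowkey: str) -> str:
    # reverse: restore '/' before decoding
    return base64.b64decode(rowkey.replace("_", "/")).decode("utf-8")
```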

MVC Routing + Running Managed Handlers for All Requests

I have some content folders in my site, mostly with some code assemblies/files, pictures, etc. I moved these to the new folder of my new domain and wanted my MVC app to issue 301s (although I’m not sure how useful a 301 is in a content case) and redirect to the destination. Definitely temporary, since the content links need to all be updated, but still a good safety net so my years of trash strewn about the internet continues to work.

The problem was the routing. Making a request for http://johndandison.com/stuff/sbs/johndandison.SBS.StorageProvider.dll never hit my MVC app, it just 404’d. A 404 would be expected if I wanted StaticFileHandler to serve it, since the file doesn’t exist any longer on johndandison.com, which is the Azure app.

I tried forcing everything to run using runAllManagedModulesForAllRequests but my app still wasn’t being hit. Not sure if it’s a route problem or an IIS problem, but it’s definitely annoying. Since I’m on Azure web sites I haven’t really dug into it much more. I’m tempted to just write a quick ‘n dirty Azure worker role to listen for requests and spit out the redirect, but it’s a holiday weekend and I just haven’t gotten around to it.
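For reference, that attribute goes on the modules element in web.config – roughly what I had in place (which, as noted, still didn’t get my app hit):

```xml
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>
```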

Anyway – if you find something broken, let me know!

Shared Workstations, ADFS & SSO (or, just who the *hell* do you think I am?!)

An interesting problem came across my desk at WTFHQ this week. Then it asked me to drop trou and cough.

Shared Workstations.

Shared workstations. Used by those in the most chaotic of workplaces, medicine. Nurses & doctors going from patient to patient don’t have time to log out/log in to each machine they use, so in many cases, a ‘guest’ type user is logged on and everyone uses the browser to get stuff done. That’s all well and good, except when you’re talking about SSO & ADFS with Office 365. Whatever user you’re logged into the machine as is who ADFS will authenticate you as, regardless of what you type into the Office 365 login fields. You can see this for yourself - next time you login to O365, type HUGEWATERMELON@yourdomain.com – provided ‘yourdomain.com’ is correct, you’ll be redirected to your ADFS, at which point NTLM takes over and signs you in as whoever you’re signed into the machine as. Anyway, if you’re logged into a guest account, what do you think happens? If that guest account has a mailbox, you’ll go to the mailbox, which is almost certainly *not* what you wanted to do.

Forms as far as the eye can see.

Forms authentication in ADFS is one way around this problem, but it’s a pretty lame user experience for those on corporate PCs. I’ll hit O365, type in my UPN, get redirected, then type it in *again* with my password. Pretty awful, huh? Almost as awful as…well, going to the doctor in the first place.

Explanation of Benefits

There’s a solution however. It has some prerequisites on your part, but they shouldn’t be too hard.

Here’s a simple example – I’m going to use reverse DNS to get the IP of the request, see if it’s internal (or in a specific subnet, or whatever), and if the reverse DNS name has my domain name in it, I’ll assume it’s an internal PC.

Extrapolate that further and you could check if that PC is in a specific OU, or whatever. Alternatively, you could see if the user is a specific guest/service account and redirect that way.

Anyway, once you’ve decided what to do with your request – let them log in with Forms or through NTLM – you need to make sure you redirect and include the original query string. This is quite important, as it’s just not going to work without it.

Rx

In ADFS 2.0 and 2.1, all of this lives in IISROOT\adfs\ls\FormsSignIn.aspx.cs. ADFS 3.0 uses HTTP.sys; I haven’t dug through it all yet but I’ll update when I get to it.

Send all requests to FormsSignIn.aspx by modifying the web.config to put Forms first.

<microsoft.identityServer.web>
    <localAuthenticationTypes>
        <add name="Forms" page="FormsSignIn.aspx" />
        <add name="Integrated" page="auth/integrated/" />
        <add name="TlsClient" page="auth/sslclient/" />
        <add name="Basic" page="auth/basic/" />
    </localAuthenticationTypes>
    ...
</microsoft.identityServer.web>

When the page loads, you need a way to differentiate requests based on *some* amount of information you have from the requesting party. This example uses IP & reverse DNS – that’s kinda lame, but you could get deep here and check AD for the computer’s OU, check the user for a group, etc.

protected void Page_Load(object sender, EventArgs e)
{
    var ctx = System.Web.HttpContext.Current;
    //get the query string - this is critical, as it contains all the info ADFS needs to process the request
    var query = ctx.Request.Url.Query;
    //do whatever you need to figure out if the machine is shared or not. For this I'm just checking the client IP
    var results = System.Net.Dns.GetHostEntry(ctx.Request.UserHostAddress);
    //doing reverse DNS (since my DNS is authoritative for my house) and checking for a domain name.
    //be able to support that differentiator in your infrastructure - if you filter on OU, for instance, make sure you can get to your AD.
    if (results.HostName.IndexOf("home.johndandison.com") > -1)
    {
        //if it isn't a shared workstation and needs to use integrated authentication, redirect to the NTLM endpoint,
        //*maintaining the existing query string*
        Response.Redirect("/adfs/ls/auth/integrated/" + query);
    }
    //there is no else - shared-workstation users should fall through to the forms login page.
}

The ‘send my kids to college’ method

If you really wanted to do this the right way (i.e., you could apply a service pack and not worry about it getting overwritten), you could write an HTTP handler that does all of your custom logic, which is exactly what Microsoft is doing with their integrated handler.

*cough*

Powershell, Office 365 & Authenticated Proxies

Short & Sweet

It’s the Friday before Christmas so I’m about to get outta here like the fat kid in dodgeball, but this came across my desk @ WTFHQ today. Using CSOM from behind an authenticated proxy? With events added in PS 2.0, it’s easy.

C#.

var securePassword = new SecureString();
"sharepoint password".ToList().ForEach(securePassword.AppendChar);
var ctx = new ClientContext("https://sharepoint site") { Credentials = new SharePointOnlineCredentials("sharepoint UPN", securePassword) };
ctx.ExecutingWebRequest += (s, e) =>
{
	e.WebRequestExecutor.WebRequest.Proxy.Credentials = new NetworkCredential("proxy username", "proxy password");
	//OR
	e.WebRequestExecutor.WebRequest.Proxy.Credentials = System.Net.CredentialCache.DefaultCredentials;
};
var web = ctx.Web;
ctx.Load(web);
ctx.ExecuteQuery();

Powershell. Not that different.

Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
$pwd = Read-Host -AsSecureString
$user= "sharepoint UPN"
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("https://sharepoint site")
$ctx.Credentials = new-object Microsoft.SharePoint.Client.SharePointOnlineCredentials($user, $pwd)

Register-ObjectEvent -InputObject $ctx -EventName ExecutingWebRequest -Action { 
	$request = $EventArgs.WebRequestExecutor.WebRequest
	Write-Host "Adding proxy to WebRequest, hold plz"
	$request.Proxy.Credentials = New-Object System.Net.NetworkCredential("proxy username", "a password")
	#or, to use default credentials (a static property, so no New-Object here)
	$request.Proxy.Credentials = [System.Net.CredentialCache]::DefaultCredentials
}

$ctx.Load($ctx.Web)
$ctx.ExecuteQuery()
Write-Host $ctx.web.Title

Let’s talk about auth, ba-by (or, Headless Auth to Office 365)

Authentication in SharePoint Online…now that’s a topic that’s been beaten all over the internet. The premier source for doing-what-you-need not doing-what-you’re-told is probably Wictor Wilen’s work on SharePoint Online, active authentication and yanking cookies off of requests. This is perfect for client/mobile/non-browser apps that need to do things with SharePoint Online.

Here at WTFHQ we use a lot of services. *Lots* of services. In fact, I can’t think of much that doesn’t call back into some service somewhere. How else would you ever do things? For example, since there are no timer jobs in SharePoint Online, you might have a scheduled task that runs on an internal server somewhere or a SQL job that needs to do some *stuff* to a SharePoint Online tenancy.

Active Auth vs. Service Accounts

Wictor’s way is cool and all, especially if you’re doing end-user type apps where someone will open the app, need to authenticate, a browser is shown, user logs into Office 365, cookies are yanked, app can now make calls on behalf of the user until that session/those cookies expire. In fact, if you’re doing an end-user app that requires user authentication, stop reading this now. You need to do exactly what Wictor is prescribing. In fact, if you *don’t* do it his way, your app will be bad and you should feel bad.


Doesn’t really work too well for headless calls, though. Imagine if the Headless Horseman had to announce his arrival? It wouldn’t go too well.

So what’s a services developer to do? Fortunately, someone else has already figured it out.

Microsoft.SharePoint.Client.SharePointOnlineCredentials

You’ll see this class thrown around a lot in PowerShell circles. Not that it’s a bad thing, it just is.

It is exactly what it looks like – a username/password pair of SharePointOnlineCredentials (who knew?). Using this, you can get an authenticated ClientContext – from there it’s the same ClientContext code you love to PropertyOrFieldNotInitializedException.

Usage

Here’s a quick and dirty sample I threw together yesterday for some peeps at work. It’s so easy, a VB developer could do it:

//inputArgs is just a stand-in for your parsed command-line arguments
var sharePointUrl = inputArgs["Url"];
var password = inputArgs["Password"];
var username = inputArgs["Username"];

Console.Write("Connecting to {0} as {1}...", sharePointUrl, username);

var securePassword = new SecureString();
password.ToList().ForEach(securePassword.AppendChar); //don't hate on my greedy memory usage: string --> list --> securestring

var ctx = new ClientContext(sharePointUrl) { Credentials = new SharePointOnlineCredentials(username, securePassword) };
var web = ctx.Web;

ctx.Load(web);
ctx.ExecuteQuery();

Console.WriteLine("done.");

Console.WriteLine("The web at {0} is named \"{1}\" and here are some of the lists: ", sharePointUrl, web.Title);
ctx.Load(ctx.Web.Lists, x => x.Include(y => y.Title, y => y.ItemCount, y => y.LastItemModifiedDate));
ctx.ExecuteQuery();
foreach (var l in ctx.Web.Lists.ToList())
{
  Console.WriteLine("List {0} has {1} items and was last modified at {2}.", l.Title, l.ItemCount, l.LastItemModifiedDate);
}

© 2014 jpd.ms
