Simple IIS Kerberos Q&A

Posting a hopefully-useful tidbit.

Hi Tristan,

Do you by any chance have a guide on how to set up IIS for Kerberos auth? I’m helping my customer and I’m a beginner with IIS.

It is a farm of 6 IIS servers, and they will be using a service acct.

DNS is configured to do the following resolution:

Websvr -> CNAME -> IP

So for instance the web site is webapp.example.net and points to a CNAME. The CNAME target is an FQDN (app-prod-vip.example.net) that resolves to an IP.

The IP points to the VIP of a load balancer that ultimately connects to the IIS server farm.

When setting the SPN, do we use the web server name or the CNAME?

Also, does the browser I’m using on the client matter for Kerberos auth (such as Chrome)?

Anything special on the web server, besides configuring Windows authentication?

Thank you!

 

Here’s what I replied with:

 

Hola!

Couple of moving parts there – using a different name (i.e. the load balancer name) won’t work with the default configuration.

You’ll need to ensure that the SPN for the CNAME is only assigned to the service account running the App Pool. If it’s on more than one account, it’s broken.

A DA needs to run:

SetSPN -S http/cname-of-app.fqdn.com DOMAIN\AppPoolAccountName

Where DOMAIN\AppPoolAccountName is the service account you set up for the application.

And that should get kerb where it needs to be from an SPN perspective. If other SPNs have been tried already, they need to be removed (and SetSPN -S should tell you that).
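
For completeness, here’s the sort of check-then-fix sequence I mean – a rough sketch using SetSPN’s standard switches, with the CNAME and account names carried over from the example above (DOMAIN\SomeOtherAccount is purely illustrative):

    # Does the SPN already exist anywhere? Duplicates = broken Kerberos
    setspn -Q http/cname-of-app.fqdn.com

    # What's currently registered on the App Pool account?
    setspn -L DOMAIN\AppPoolAccountName

    # If the SPN turned up on the wrong account, remove it from there first
    setspn -D http/cname-of-app.fqdn.com DOMAIN\SomeOtherAccount

    # Then register it on the App Pool account (-S checks for duplicates before adding)
    setspn -S http/cname-of-app.fqdn.com DOMAIN\AppPoolAccountName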

(Once you’ve established an SPN for the account, the Delegation tab should appear for it in ADUC. This allows you to configure delegation (constrained or otherwise), which you might not be doing, so we’ll cover that last.)

Next, you need to ensure the App Pool Account is set to DOMAIN\AppPoolAccountName (i.e. the same “custom” domain account) on all the boxes. (ApplicationPoolIdentity or NetworkService or LocalSystem or anything other than a Domain account won’t work for load-balanced Kerberos authentication.)

Then, you need to either

  • disable Kernel-mode authentication, or
  • set useAppPoolCredentials=true

on them all.

There’s a tickbox for K-mode auth under Windows Authentication (Advanced Settings) in IIS Manager; useAppPoolCredentials is set in configuration (the windowsAuthentication section, which lives in applicationHost.config rather than web.config), and keeping kernel mode on with useAppPoolCredentials is often the preferable option. What either of these does is move from using the box identity (machine account) to decrypt and validate tickets, to using the App Pool Account to decrypt and validate tickets. This is required for a farm scenario, but for a single-box scenario it’s not necessary (only the SPN registration is).
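
If you’d rather script it than click it, here’s a minimal sketch of both options using the WebAdministration module – ‘Default Web Site’ is a placeholder for your site name, and you only need one of the two settings:

    Import-Module WebAdministration

    # Option A: turn kernel-mode authentication off for the site
    Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'Default Web Site' `
        -Filter 'system.webServer/security/authentication/windowsAuthentication' `
        -Name 'useKernelMode' -Value $false

    # Option B: keep kernel mode, but decrypt tickets with the App Pool account's key
    Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'Default Web Site' `
        -Filter 'system.webServer/security/authentication/windowsAuthentication' `
        -Name 'useAppPoolCredentials' -Value $true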

Once that’s done, Kerb should work to the websites, which can be validated with a network trace or by looking at the logs. (Let’s throw in a reboot after K-mode auth is toggled off, for good measure.) (Spotting Kerb in logs – short version: a single 401 with WWW-Authenticate: Negotiate, followed by a request carrying a long ticket and a 200, is Kerb; 401/401/200 is NTLM.)

I’d always test with IE; I *think* if IE works then Chrome has a good chance. If it doesn’t, no chance. :)

Always test from a remote box (avoids reflection protection), and use klist purge (and a closed browser) to reset between tests.
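
The reset between attempts can be done from a command prompt on the client – a tiny sketch:

    # Flush cached Kerberos tickets for the current logon session (then close every browser window)
    klist purge

    # After the next test, check whether an HTTP/ service ticket for the app's name was actually issued
    klist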

If Kerb works to the site, you can then configure the App Pool Account in ADUC for constrained delegation to the next hop in the same way. Hit Add, browse for the identity the next-hop process runs as (often a service account, if it runs as a domain identity; otherwise the computer account for that box), and then pick the right SPN from the list.
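
If you’d rather do that from PowerShell than ADUC, the Delegation tab is largely editing msDS-AllowedToDelegateTo on the account (plus a flag or two, depending on the options you pick). A hedged sketch, where the SQL SPN is only an example of a “next hop”:

    Import-Module ActiveDirectory

    # See what the App Pool account may currently delegate to
    Get-ADUser AppPoolAccountName -Properties msDS-AllowedToDelegateTo |
        Select-Object -ExpandProperty msDS-AllowedToDelegateTo

    # Allow constrained delegation to the next hop's SPN (example: a SQL Server instance)
    Set-ADUser AppPoolAccountName -Add @{'msDS-AllowedToDelegateTo' = 'MSSQLSvc/sql01.example.net:1433'}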

HTH!

Tip: Check that your Offline Root CA is actually Offline, mmkay?

I spend a fair whack of time chatting PKI and certificates with customers, and tyre-kicking their environments as part of the Active Directory Certificate Services Assessment (or ADCSA – available via Premier Support).

Many customers have a fairly standard design, often deployed by a partner (it’s the “off the shelf plus customize” option), which includes an Offline Root CA, and one or more online Issuing CAs.

The Offline Root purely exists to sign Issuing CA certificates and publish a CRL occasionally, and is typically airgapped if it’s physical. The Issuing CAs are the ones which are typically connected to directly (or via a Web Service) by client computers.

What’s perhaps underemphasised in some designs is the implied meaning of the word “Offline”. “Offline” means “no network cable”.

 

Pros of an Offline Root CA:

– Because it’s airgapped (i.e. has no network cable):

– You don’t have to service it (i.e. patch, service pack, update – except for reliability issues)

– You don’t have to closely manage its operational health – just boot it up once a quarter, copy a CRL to local storage, and shut it down (that routine is sketched just after this list)

– It gets to use separate credentials from the rest of AD, so it’s isolated from credential attack

  – Also, remember, no network cable = limited network attack surface, right?
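
As an aside, that quarterly “boot it, refresh the CRL, copy it off” routine usually boils down to a couple of certutil commands – a rough sketch only, with illustrative paths and file names:

    # On the offline root, once it's booted: publish a fresh CRL
    certutil -crl

    # Copy the new CRL from the default publish location onto removable media (drive letter illustrative)
    Copy-Item "$env:windir\System32\CertSrv\CertEnroll\*.crl" -Destination 'E:\'

    # Back on a connected machine: publish the copied CRL into AD / the CDP location
    certutil -dspublish -f 'E:\Offline Root CA.crl'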

 

Cons of an Offline Root CA:

– Because it’s got no network cable:

– It’s harder to manage than an Online Root CA, which it becomes if you plug a network cable into it.

  – You need to use console access if it’s on a VM host (which, broadly, is an iffy idea at most organizations. Yes, probably including yours.)

  – You need to use virtual floppies, or real floppies, or probably more likely USB sticks to transfer files to and from it.

 

Okay, so you trade no network cable and hands-on management for greatly improved security, with the goal of keeping your root’s private key safe.

Airgapped is pretty safe: no network cable is a fairly heavy defence against the casual network-based attacker!

 

But…

But: at some point, due to error or oversight, many “Offline” Root CAs get attached to the network. Maybe because a new admin wants to RDP in for some reason, maybe because it’s more convenient when trying to publish the CRL. Whatever. Now, we can build a list of pros and cons for your Online Offline CA:

 

Pros and Cons of an Online Offline Root CA:

– It’s one of the most vulnerable hosts on the network, because it’s not patched, because it’s not part of any patch group or configured for WSUS, and you don’t need to patch Offline CAs, right?

– Does it have antivirus or firewall or even local policy applied? Probably not, because it’s been designed not to be network-attached. (Not that settings or AV or software defences beat an unpatched exploit, but let’s suggest they might help with some common attacks.)

– It’s easier to manage, because it’s network-accessible. So, yay that!

 

The Good News:

Hah! Just kidding! If you’ve read carefully this far and think there’s any good news, my communication skills have failed you. There’s not, and depending on your other security practices and the assurance level required for certs issued by this CA, you should think carefully about starting over with provable key provenance.

If your Offline (unpatched!) CA has been Online for any period of time, the Root’s private key provenance is unclear. Could the machine have been exploited, and the private key leaked, undetected? Unpatched security vulnerabilities make that more likely. Lack of close auditing reduces assurance. The primary security control in place – i.e. no network cable – was removed.

But how useful is a root private key, really? Well, an attacker can simply use it to mint any certificate which it’s likely your organization’s computers will trust, for any purpose, undetectably. The use of such a cert would be basically invisible to your organization without unbelievably close monitoring, the type of unbelievably close monitoring you probably don’t have if an Offline CA ends up Online for any length of time.

 

So what do I do?

Check now. Don’t wait for me to show up and look horrified/disappointed! Or worse, tell jokes. You wouldn’t like me when I’m funny.

If you’re running a physical computer as an Offline CA, it’s pretty straightforward. It’s hopefully in a safe, or in another secured location where you can either prove it’s airgapped or it’s very easy to infer that it is (nobody’s in the safe; computer is airgapped).

If you’re running on a virtualized OS, it’s murkier. Maybe someone adjusted the virtual network settings; maybe it’s attached to a fake network; maybe it’s unattached completely. Virtualized CAs have many other security implications which need attention if you’re serious about your PKI.

So quick test: If you can RDP to the CA from an internal network client, it’s not airgapped. And it’s not Offline. And if it’s not Offline, and it’s not being updated with all haste at the start of each month, odds are it’s been vulnerable to known exploits for a period of time already (without restrictive firewall policies – but if you think about an RDP vulnerability, maybe even with restrictive firewall policies in place…).
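
A couple of concrete ways to run that quick test, if you want more than “try to RDP to it” – a hedged sketch, with OFFLINEROOTCA standing in for your CA’s host/VM name:

    # From a domain-joined client: is the "offline" root even reachable on the RDP port?
    Test-NetConnection -ComputerName OFFLINEROOTCA -Port 3389

    # On the Hyper-V host, if it's a VM: is its virtual NIC actually attached to a switch?
    Get-VMNetworkAdapter -VMName 'OFFLINEROOTCA' | Select-Object VMName, SwitchName, Connected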

There’s an extensive white paper on Securing Public Key Infrastructure which helps talk through many important aspects of CA security.

But this is something I’ve seen in the wild, and it’s scary.

So as a final note: if you come across something called “XXQ-BLAH-CA01” in a VM console and it doesn’t seem to be network connected: Leave it unplugged!

 

(Also, please avoid deleting it. No, there’s no story there. Why would you ask?)

Huh.

You find lots of draft posts when your user interface changes…

Hi! I’ve been waiting for my blog to migrate for what seems like forever. Now it’s back, and Open Live Writer’s a thing, and so I guess I might be back too. Sweet. Somewhere to jot things and rant a bit.

FAQ: Yep, still alive.

Hyper-V Synthetic Networking Is Much Faster

I decided it was time to look at upgrading my home broadband, mostly to get better-than-128KB upload speeds.

After the ISP-side change had eventually wound its way through, I was interested to see that while my upload speeds had improved, my download speeds still seemed capped at about 25Mbits/s (rule of thumb: divide by 8 to get megabytes per sec, so about 3MBytes/sec), despite a new modem and speeds that should’ve been “up to 100Mbit/sec”.

Before launching a tirade at the ISP, I re-plugged my laptop directly into the cable modem, and got around 120MBit/sec test results from speedtest.net, so knew it wasn’t the ISP-side plan itself rate-limiting me any more. And thus began the investigation.

A packet walks into a bar

My setup looks basically like this:

(regular internal networks on 10.x subnets) -> [ TMG -> DMZ -> Smoothwall ->] Cable

With a “side path” for the Xbox, which goes (10.x) -> Smoothwall -> Cable, to benefit from the UPnP mapping feature of Smoothwall.

Technically, any UPnP device or app can use the same side path, but “regular” browsing by WPAD-enabled stuff can benefit from TMG’s web proxy cache to some extent.

Any testing I performed through Smoothwall led me back to the 25MBit capped rate. I checked QoS was off, which it was, and then decided I’d eliminate Smoothwall and try pushing the laptop through TMG directly (bypassing the Smoothwall) as a test.

The test gave me ~115MBit/sec+. Good enough!

Leaping to conclusions

The complicating factor is that everything to the right of the internal networks above is virtualized on my WS2012 R2 Hyper-V box, and I guessed that the rate limiting might have something to do with the legacy network adapters in the Smoothwall VM (one each for internal and external, plus a perimeter NIC for fun). My TMG VM uses synthetic NICs only, so I assumed that was probably the key performance difference when using it. The CPU on the router VM didn’t seem to be getting above 0% as measured externally (where 4% represents 100% of one core), so I took that as a sign the cause might’ve been semi-external, or in the hypervisor.

Rather than work out what performance counters I could use to prove this, I figured I’d try to get the legacy NICs upgraded to the new Hyper-V synthetic NICs instead.

And after reading the vast array of “here’s how you compile XYZ into your kernel” articles on it, I figured that it sounded way too hard (especially as I was using a now-legacy copy of SmoothWall Express 3.0) and finding some kind of distro which worked with Hyper-V enlightenments was probably the easiest route! No pun intended.

How do you test that? Easy, you just leave a VM at default settings – which include a new NIC – and see if it can detect the adapter. If it can, great! If it can’t, try another!
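
On the host side, you can at least see which VMs are still using emulated NICs – and swap one for a synthetic adapter – before you start distro-hopping. A quick sketch, with the VM, adapter and switch names as placeholders (the VM needs to be shut down to change adapters):

    # Which of the VM's adapters are legacy (emulated) and which are synthetic?
    Get-VMNetworkAdapter -VMName 'Smoothwall' | Select-Object Name, IsLegacy, SwitchName

    # Drop a legacy adapter and add a synthetic one on the same virtual switch
    Remove-VMNetworkAdapter -VMName 'Smoothwall' -Name 'Legacy Network Adapter'
    Add-VMNetworkAdapter -VMName 'Smoothwall' -SwitchName 'External' -Name 'Synthetic NIC'

Whether the guest actually sees that synthetic adapter is, of course, exactly the integration-services question above.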

My Little Firewall (packets are magic)

First, I did a little research as to what might support Hyper-V natively, and found pfSense 2.2, based on FreeBSD 10.2, which has inbuilt Hyper-V integration services. I installed it, it found the new NICs, and after minimal faffing around, I had a working firewall, which speed tested up around 115-120MBit/sec. Boosh!

pfSense looks really capable and has lots of new options to explore, but I wondered whether upgrading to Smoothwall Express 3.1 (which I’d been putting off) would have yielded similar results. I don’t have a huge rule set to migrate, so I started building a fresh Smoothwall Express 3.1 VM from the 200MB ISO, and after configuring my networks from scratch, tried that… also a nice, fast result with synthetic NICs.

So I now have a choice of external firewalls (and whether to bridge the cable modem again, which I like: fewer moving parts to manage), with speedy integrated networking components, and I’m generally quite happy that non-Windows distros seem to be integrating the Hyper-V components for faster-than-legacy networking!

[Update and aside 2017-03-03 – I eventually added a scooter computer to the lineup as an always-on low-power host, and put PfSense onto it “bare metal”. But it quickly became apparent that the Realtek NIC driver included in the box was pretty suboptimal, and would drop out under mild load. So I’m now running Server 2016 on the tiny host, and PfSense in a VM on it, and it’s fast and reliable using the Windows Realtek NIC driver virtualized through Hyper-V. Ironic fun!]

MSDEToText for TMG sometimes produces negative IP addresses

… which can be annoying when you're trying to work out where your traffic's headed, with something like LogParser.

I fixed my MSDEToTSV (note, I renamed it so that it reminded me what format it produces – I only need to use it once in a blue moon), and I'm posting the fixed version here in case you'd like a copy.

The edits were just to the IpFromDbl function.
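
For the curious: the negative numbers are the usual symptom of a 32-bit IP value being read back as a signed integer, so anything above 127.255.255.255 wraps negative. Here’s an illustrative PowerShell version of the corrected conversion – not the actual VBScript edit, and the octet order shown is an assumption, but it shows the idea:

    function Convert-DblToIp {
        param([double]$Value)
        # Reinterpret the (possibly negative) value as an unsigned 32-bit quantity
        $u  = [uint32]([int64]$Value -band 0xFFFFFFFF)
        $o1 = ($u -shr 24) -band 0xFF
        $o2 = ($u -shr 16) -band 0xFF
        $o3 = ($u -shr 8)  -band 0xFF
        $o4 = $u -band 0xFF
        "$o1.$o2.$o3.$o4"
    }

    # -1062731519 is 0xC0A80101 read as signed, i.e. 192.168.1.1
    Convert-DblToIp -Value -1062731519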

MSDEToTsv.zip

Installing Windows from a USB Stick (and copying it to that stick from an ISO)

(Quick public dump so I can find the tool link again easily)

 

  1. Download the Windows 7 USB/DVD tool from here. (NB links to the page, not the EXE)
  2. Download ISO image from store or MSDN
  3. Run Win7 USB DVD Tool and point it to ISO and USB Stick
    • Label stick
    • Seriously Tristan, label the damn stick
    • You know what happens when you don’t.
  4. Boot from stick, or run Setup from stick
  5. ???
  6. Profit!

Works for Windows 8, worked for Windows 8.1 Preview, no reason to think it won’t work for Windows 8.1 RTM, or Windows Server 2012 R2 etc.

Good if you’re planning a clean installation; otherwise, simply double-clicking the ISO is probably enough to do the upgrade as of Windows 8, which can mount ISOs and VHDs from a double-click.
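
If double-clicking isn’t an option (Server Core, automation), the same mount can be done from PowerShell on Windows 8 / Server 2012 and later – the path is a placeholder:

    # Mount the ISO and report the drive letter it was given
    $img = Mount-DiskImage -ImagePath 'C:\ISOs\Windows81.iso' -PassThru
    ($img | Get-Volume).DriveLetter

    # Unmount it again when done
    Dismount-DiskImage -ImagePath 'C:\ISOs\Windows81.iso'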

Office Web Viewer

While SkyDrive’s been able to display Office documents in your web browser for what seems like ages now, check this out:

Do you have Office documents on your website or blog that you want your readers to view even if they don’t have Office installed?  Would you rather view a document before downloading it?  To give your audience a better experience, try the Office Web Viewer.

What is the Office Web Viewer?

It’s a service that creates Office Web Viewer links.  Office Web Viewer links open Word, PowerPoint or Excel files in the browser that would otherwise be downloaded. You can easily turn a download link into an Office Web Viewer link to use in your website or blog (e.g., recipes, photo slide show, a menu, or a budget template).

Some benefits of the Office Web Viewer include:

  • You don’t need to convert Office files for the web (e.g., PDF, HTML).
  • Anyone can view Office files from your website or blog, even if they don’t have Office.
  • It keeps eyes on your website or blog, because readers don’t need to download the file and they stay in the browser.
  • One link will work for computers, tablets, and mobile phones.

Yup, it’s a service which creates viewable links to Office documents, in a browser.
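
Under the covers it’s just URL composition: the publicly reachable document URL goes, encoded, into the src parameter of the viewer endpoint. A quick sketch – the endpoint is the one documented at the time, and the document URL is a placeholder:

    # Build an Office Web Viewer link for a publicly accessible document
    $docUrl  = 'https://example.com/files/Budget.xlsx'
    $viewUrl = 'https://view.officeapps.live.com/op/view.aspx?src=' + [uri]::EscapeDataString($docUrl)
    $viewUrl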

App Pool Recycling Defaults: Why 1740 minutes?

Without doubt, one of the most frequently asked questions when discussing Application Pools in IIS Admin and Troubleshooting workshops!

Scott Forsyth shares Wade’s answer

In Wade’s words: “you don’t get a resonate pattern”.

then follows up with useful advice on establishing your own best recycling interval:

First off, I think 29 hours is a good default. For a situation where you don’t know the environment, which is the case for a default setting, having a non-resonate pattern greater than one day is a good idea.

However, since you likely know your environment, it’s best to change this.

Lots of useful information and commentary in a post that’s sure to become a genre classic. Really. No sarcasmo.
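
For example, if you know 3am is quiet, you might swap the rolling 29-hour timer for a fixed schedule – a sketch using the WebAdministration module, with the pool name and time as placeholders:

    Import-Module WebAdministration

    # Turn off the rolling 1740-minute (29 hour) timer...
    Set-ItemProperty 'IIS:\AppPools\MyAppPool' -Name recycling.periodicRestart.time -Value '00:00:00'

    # ...and recycle at a fixed, known-quiet time instead
    New-ItemProperty 'IIS:\AppPools\MyAppPool' -Name recycling.periodicRestart.schedule -Value @{value='03:00:00'}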

IIS 7: But why do I get a 500.19 – Cannot add duplicate collection entry – with 0x800700b7!?

(Because I’m sure that was your exact exclamation when you hit it!)

Also applies to IIS 7.5 (Windows Server 2008 R2), IIS 8.0 (Windows Server 2012), IIS 8.5 (Windows Server 2012 R2) and IIS 10 (Windows Server 2016).

The Background

This week, I was out at a customer site performing an IIS Health Check, and got pulled into a side meeting to look at an app problem on a different box.

The customer was migrating a bunch of apps from older servers onto some shiny new IIS 7.5 servers, and had hit a snag while testing one of these with their new Windows 7 build.

To work around that, they were going to use IE in compatibility mode (via X-UA-Compatible), but adding HTTP response headers caused the app to fail completely and instantly with a 500.19 configuration error.

We tested with a different header (“X-UA-Preposterous”) and it had the same problem, so we knew it wasn’t the header itself.

“Now that’s interesting!”

At first I thought it was an app failure, but as it turns out…

The Site Layout

This becomes important – remember I noted that the app was being migrated from an old box to a new one?

Well, on the old box, it was probably one app of many. But the new model is one app per site, so a site was being created for each app.

The old location for the app was (say) http://oldserver/myapp/, but the contents were copied to the root of http://newsite/ on the new server.

To allow the app to run without modification to all its paths, a virtual directory was created for /myapp/ (to mimic the old path) which also pointed to the root of newsite.

[screenshot]

So myApp above points to c:\inetpub\wwwroot, and so does Default Web Site.

Setting up the problem

So, using the GUI, I set the X-UA-Compatible header to IE=EmulateIE7. The GUI wrote this to the web.config file, as you can see in the status bar at the bottom:

[screenshot]

Browsing to a file in the root of the website works absolutely fine. No problem at all.

Browsing to anything in the /myApp/ vdir, though, is instantly fatal:

[screenshot]

If you try to open HTTP Response Headers in the /myApp/ virtual directory, you also get a configuration error:

[screenshot]

What does that tell us? It tells us that the configuration isn’t valid… interesting… because it’s trying to add a unique key for X-UA-Compatible twice.

Why twice? Where’s it set? We’re at the site level, so we checked the Server level HTTP Response Headers… blank.

But… it’s set in a web.config file, as we can see above. And the web.config file is in the same location as the path.

Lightbulb moment

Ah. So we’re probably processing the same web.config twice, once for each segment of the url!

So, when the user requests something in the root of the site, like http://website/something.asp:

1. IIS (well, the W3WP configuration reader) opens the site’s web.config file, and creates a customHeaders collection with a key of X-UA-Compatible

2. IIS serves the file

And it works. But when the user requests something from the virtual directory as well – like http://website/myApp/something.asp:

1. IIS opens the site web.config file, and creates a customHeaders collection with a key of X-UA-Compatible

2. IIS opens the virtual directory web.config file (which is the same web.config file) and tries to create the key again, but can’t, because it’s not unique

3. IIS can’t logically resolve the configuration, so responds with an error

Options for Fixing It

1. Don’t use a virtual directory

(or rather, don’t point the virtual directory to the root of the website)

This problem exclusively affects a “looped” configuration file, so if you move the site contents into a physical directory in that path, it’ll just work.

There will be one configuration file per level, the GUI won’t get confused, and nor will the configuration system.

Then you just use a redirecting default.asp or redirect rules to bounce requests from the root to /myApp/.

2. Clear the collection

You can add a <clear /> element to the web.config file, and that’ll fix it for any individual collection, as shown here:

<customHeaders>
      <clear />
      <add name="X-UA-Compatible" value="IE=EmulateIE7" />
</customHeaders>

The <clear /> element tells IIS to forget whatever it has inherited for that collection, and use just what follows. (When you break inheritance of a collection in the GUI, this is what it does under the covers.)

The problem with this approach is that you need to do it manually, and you need to do it for every collection.

In our case, we also had Failed Request Tracing rules, which failed with the same type of error promptly after we fixed the problem above – confirming the diagnosis.

3. Move the configuration

And this splits into two possible approaches:

3a. Editing applicationHost.config by hand

You can remove the web.config and use a <location path="New Site/myApp"> tag in applicationHost.config to store the configuration, and that’ll work until someone uses web.config again.
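
If you’d rather not hand-edit the XML, the configuration cmdlets will write that location tag for you – a sketch, reusing the “New Site/myApp” path and header from the example above:

    Import-Module WebAdministration

    # Writes into applicationHost.config under <location path="New Site/myApp">,
    # rather than creating or touching a web.config in the content folder
    Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'New Site/myApp' `
        -Filter 'system.webServer/httpProtocol/customHeaders' -Name '.' `
        -Value @{name='X-UA-Compatible'; value='IE=EmulateIE7'}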

3b. Using Feature Delegation

If you do want to prevent web.config being used, you can use the Feature Delegation option to make all the properties you don’t want people to set in web.config files Read Only (aka “lock” those sections). “Not Delegated” would also work.

[screenshot]

This can be done per-Site using Custom Site Delegation, or globally.

And! This has the added happy side-benefit of making the GUI target applicationHost.config, rather than creating or modifying web.config files for that site.
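
That locking can also be scripted if there are a lot of servers to cover – a hedged sketch for the response-headers section (other sections follow the same pattern, and Allow reverses it):

    Import-Module WebAdministration

    # Roughly the scripted equivalent of setting the feature to Read Only / Not Delegated globally:
    # web.config files can no longer override system.webServer/httpProtocol
    Set-WebConfiguration -PSPath 'MACHINE/WEBROOT/APPHOST' `
        -Filter 'system.webServer/httpProtocol' -Metadata overrideMode -Value Deny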

 

Hope that helps you if you hit it…