Sunsetting TMG 2010 with some (free!) Best Practices

Long and boring post ahead. So: KITTENS! There. Fluffy now.

As one of the Premier Field Engineers performing ISA Server Health Checks and then Threat Management Gateway (TMG) configuration reviews (a default assignment, given my long association with Proxy 2.0 and then ISA), I was reviewing a document I’d put together for a customer just before shredding it, and thought:

You know what? Everyone should do these things! These recommendations are common enough that I seem to make them every time I see a TMG box… so why not generalize and recommend them here? Put them out into the wild. Get them shouted down. Give them their time in the sun.

So on the off-chance you’re a survivor of the TMG Survival Guide and you’re looking for some last-minute as-seen-in-the-real-world TMG corrective advice – and by “last minute”, I mean:

  • You know the base product is in Extended Support until 2020, then it’s going away. (sniff!)
  • You understand that Malware Scanning and Network Inspection System are already frozen at their last update level.
  • You know URL Categorization (Filtering) got turned off already so any rules using it might fail-open (or fail-closed)…

And in terms of pre-migration work:

  • You’ve also been through your rule set, and tested that everything’s Least Privilege-compliant,
    • i.e. No broad “everyone can access anything/TMG/anywhere with any protocol” rules or anything like that.
      • No really, if you can connect to TMG via SMB, that’s usually not a good sign… You’re at least using Windows Update for patches, though, right?
  • Maybe you’ve performed an ISAINFO (and/or TMGBPA) export of your rule set so that you can ease the process of recreating them on the next egress device you pick? 🙂

…Because these are all fantastic first steps on the long migration path between proxies. If you haven’t done them, do put them on the list.

So before you shut down TMG that final time, and repurpose the boxes for Quake servers (or whatever you kids use spare boxes for these days)…

What best practices can you apply in the meantime? Glad you asked!

Here’s the short list; the details follow.

Proactively Protect The Box

  • Install the latest Windows Updates
  • Install the latest TMG Rollup Hotfix (SP2 UR5, potentially + .650 or later)
  • (Install any updates for any other software on the box)

Operating System Protection

  • Firewalling
  • De-Adminning
  • Attack Surface Reduction
  • AV exclusions

TMG Health and Perf

  • Check Tracing isn’t enabled
  • Disable/Relax Flood Prevention

And now the details…

Proactively Protect The Box

“It’s a firewall, it doesn’t need patching!” (Just for clarity: that’s not true.)

Install the latest Windows Updates

  • If you’re not installing Windows updates, um, I don’t know what to tell you?

You understand that unpatched vulnerabilities win over security settings, permissions and antivirus, right? Any on-box control is potentially circumventable via an unpatched (bad) vuln.

And you’re still thinking it’s optional? Well! That’s nice! I hope you’ve a mitigation strategy in place, and an incident response plan for when that one fails.

TMG defends itself pretty heavily against network attack (by default, and to an extent – it still leverages OS components for certain chunks of functionality), but lots of people end up creating rules which – paraphrased – allow the Internal network to hit any port on the TMG computer. Because reasons!

This is the same pathology which leads people to not patch their CAs, or not to use firewalling between hosts on their internal network – it’s the opposite of a defence in depth approach!

Anyway, back to updates:

  • When I check the update state of a box, I run MBSACLI (the command-line version of MBSA), using the current WindowsUpdate catalog CAB if the box doesn’t have Internet connectivity:

mbsacli /xmlout /nvc /nd /wi /catalog .\ /unicode > %computername%-MBSA.xml

    • I actively avoid using the default customer WSUS catalog, because it’s entirely possible to be 100% compliant with the WSUS approval policy while still missing unapproved updates from five years ago – updates which were skipped for a good reason at the time, but where that decision was never revisited.
  • It is uncommon in my experience to find that servers are up to date. For a security appliance at the edge of the network, used as an ingress or egress point by thousands of clients, this is suboptimal.
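For what it’s worth, a report like the one above can be triaged programmatically. Here’s a Python sketch – the element and attribute names (UpdateData, IsInstalled, Title) are my assumptions about the report shape, not gospel, so check them against a real export before relying on this:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of the report shape; names are assumed, not documented here.
SAMPLE = """<SecScan>
  <Check><Detail>
    <UpdateData Title="KB123456" IsInstalled="false"/>
    <UpdateData Title="KB654321" IsInstalled="true"/>
  </Detail></Check>
</SecScan>"""

def missing_updates(report_xml: str) -> list[str]:
    """List the titles of updates the report marks as not installed."""
    root = ET.fromstring(report_xml)
    return [
        node.get("Title", "(untitled)")
        for node in root.iter("UpdateData")
        if node.get("IsInstalled", "").lower() == "false"
    ]

print(missing_updates(SAMPLE))  # ['KB123456']
```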


Windows Server 2008 R2 Service Pack 1 is needed for Security Updates

Keep in mind that some updates require the presence of a Service Pack or other major update.

  • So the first thing I’d check is WinVer.
  • If WinVer says you’re on Windows 2008 R2 version 7600 and doesn’t mention a Service Pack, you need to get to 7601 (Service Pack 1) pronto, and then start applying all the updates which have required SP1 – say, the last 4-5 years’ worth, which includes many Critical updates.
  • Windows 2008 should be at SP2. If it’s not at SP2, same thing applies as above.

This, again, is sadly not uncommon.


If You Found You Had Something Missing: Why Not Just Use Windows Update?

  • If you find you’re missing updates because { ¯\_(ツ)_/¯ }, my standard remediation suggestion is: just point the boxes at public WindowsUpdate and specify your schedule. Let them pop out through a proxy, or go direct if they’re edge devices.

Yep. I’m serious. Better a security-sensitive device which is up to date by automatic patching at 3am on a Thursday than one which is out of date at all times by policy.

See also: Least Privilege Rule Set. If an attacker can’t hit the vulnerable port, you don’t have that problem.


Install the latest TMG Rollup Hotfix

Now, don’t misunderstand me: TMG isn’t the simplest thing in the universe to update (unlike its predecessor ISA Server, which was a positive dream by comparison). But if you’re reading this, you probably work in IT, so that’s not actually an excuse not to do it! 🙂

Yes, it’s a pain going from RTM to SP1 to SP1 + U1 to SP2 to SP2 Rollup 5, but… you should do it. You need to do it. If you’re one rollup behind, you’re actually 12-18 months of updates out of date. With hundreds of builds in between. Many issues have been fixed over the years, including hangs, crashes, and possibly a security update or two, if memory serves.

  • The latest rollup version I’m aware of is TMG Service Pack 2 with Update Rollup 5. If Help/About in the TMG MMC shows you a version earlier than 7.0.9193.644, well – that update was from 2014.
  • There’s one post-rollup hotfix I’ve seen (it’s for SNI websites with HTTPS inspection enabled, but it provides a version bump to .650 for many core components too) which gets us to April 2015.
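If you’re auditing more than one box, the Help/About check can be scripted. This is just an illustrative Python sketch using the build numbers quoted above – the helper itself is mine, not a Microsoft tool:

```python
# Compare a TMG build string against the SP2 UR5 baseline from the post.
def parse_build(version: str) -> tuple[int, ...]:
    """Turn '7.0.9193.644' into a comparable tuple (7, 0, 9193, 644)."""
    return tuple(int(part) for part in version.split("."))

SP2_UR5 = parse_build("7.0.9193.644")          # Update Rollup 5 (2014)
POST_UR5_HOTFIX = parse_build("7.0.9193.650")  # SNI/HTTPS-inspection hotfix (2015)

def needs_update(installed: str) -> bool:
    """True if the installed build predates SP2 UR5."""
    return parse_build(installed) < SP2_UR5

print(needs_update("7.0.9193.575"))  # True – behind UR5
print(needs_update("7.0.9193.650"))  # False – at the post-rollup hotfix level
```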


Operating System Protection

Lifecycle and post-Lifecycle Firewalling

In April 2020, TMG exits Extended Support and is no more.

But by a quirk of the Support Lifecycle, Windows Server 2008 (and R2) actually exits Extended Support in January 2020, so a TMG box running down the clock will potentially be partially unprotected from an OS security updates perspective between January and April. (Unless a Custom Support Agreement is available, but it’s probably more costly than the alternative). So it’s not a terrible assumption that you’ve basically got until Dec 31, 2019 to get everything sorted out.

  • I don’t mind restating the obvious, so I will: You should have migrated away from TMG before the end of 2019. Please!
    • That’s still 3 years from now to plan and execute your migration
    • So if you haven’t already started, please add it to your “To Do: 2017” list now.
  • If you do still have some TMG kicking around at that point, consider hardening the TMG Firewall policies (including the System policies) to limit all nonessential connectivity to the TMG hosts by any other computer.
    • In fact, think about doing that anyway, particularly if you actually had work items pending from the “Install Windows Updates” item above. Because that’s an attack surface exposure compounded with known vulnerabilities. That’s a poor combination for a security device.

If you’re planning to run beyond the end of support, don’t!

But if you do find yourself there: also think about defence in depth approaches. The sort you’d want to take with a Windows 2000 machine on your network if some business unit decided it needed to be added this year: isolate it, and put external firewalls in front of and behind it, so you seriously limit the ingress and egress paths available to it in case of compromise. Yes, TMG’s a firewall, but trusting {the actions of an on-box firewall which isn’t receiving security updates any more (in 2020)} on {an operating system which also isn’t receiving security updates any more} seems like a bad bet compared to an external security device which is presumably still getting updates. Yah?



De-Adminning

  • Just check the membership of any groups who have Admin permission to the box.
  • Then eliminate any local admins except one (if you don’t fully de-admin boxes), and remove any Domain groups you can.

Then, unless you’re sure (I mean certain, i.e. you’ve checked, not “I assume it’s quite unlikely”) that a) there’s only one local Admin account, and b) the password for that account is already unique and not known to anyone unauthorized, reset the remaining Admin password to a unique value. (Unless you’re already a LAPS shop, or use other password management tools… but please, check whether TMG’s part of the LAPS group, don’t just assume it is – that’s how SUS patching fails too!)


Basic Attack Surface Reduction

Most TMG boxes seem to have management agents for something or another installed on them. Actually, as a related observation, it’s not uncommon for me to find servers with multiple management agents for multiple generations of monitoring systems on them. Often disused ones. These are pure attack surface additions, and often running with privileged access levels. Very often with known vulnerabilities.

In short: Either kill ‘em, or at least make sure they can’t be contacted over the network (using Firewall policy).

  • If you have looked at them in the last 6 months, you can be excused from this item.
  • If not, check to see what the file dates of the EXEs are. If they’re over 3 years old, they’re probably a liability and almost certainly aren’t being updated, and simply represent an increased attack surface, so consider removing them.
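As a quick triage aid, the file-date check above can be sketched in a few lines of Python. The folder path is a placeholder for wherever the agent actually lives:

```python
from datetime import datetime, timedelta
from pathlib import Path

def is_stale(modified: datetime, now: datetime, years: int = 3) -> bool:
    """True if a file's last-modified time is more than `years` years ago."""
    return modified < now - timedelta(days=365 * years)

def stale_executables(folder: str, years: int = 3) -> list[Path]:
    """Flag .exe files in `folder` (placeholder path) untouched for `years` years."""
    now = datetime.now()
    return [
        p for p in Path(folder).glob("*.exe")
        if is_stale(datetime.fromtimestamp(p.stat().st_mtime), now, years)
    ]

# Example: stale_executables(r"C:\Program Files\SomeOldAgent")
```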



AV exclusions

Observe the exclusions needed for Antivirus when running on a TMG host. If you don’t exclude the right stuff, it can get a bit jammed up.


TMG Health


Check Tracing isn’t enabled

This one’s much less common than the above few.

  • Run RESMON for a short while, look at the Disk IO area, and sort by Bytes Total/sec. Note any files which have lots of IO over a 2 minute period.
    • (The idea is to try to minimize IO where possible)
  • If activity to ISALOG.BIN is chewing through a megabyte or more per second, TMG may be tracing something
    • or still tracing something – this has been seen when TMGBPA is used to run a diagnostic trace but for whatever reason it doesn’t terminate cleanly.
    • It might also indicate a diagnostic logging session is in progress (just check the console under Troubleshooting –> Diagnostic logging and hit Disable if it isn’t already disabled).

Note that in my experience, some minimal isalog.bin activity (say under 64K/sec) is normal.

If tracing is stuck on, run the ISA Data Packager again, open the tracing options, and untick everything.


Flood Mitigation

Going to say something a bit controversial here: You might want to experiment with turning off or massively increasing the defaults for flood prevention, particularly for outbound scenarios.

The defaults for this feature haven’t changed since it was introduced in 2004, but wow, Internet surfing patterns sure have.

So I say:

  • Try a 10X increase in the numbers, particularly for HTTP and TCP connections, and see how you go.
    • If it stops the constant alerting about “infected clients” – and you’ve got burnout from chasing them down only to find it was Bruce in Marketing manually opening eighteen instances of Firefox against their brand new multi-pronged CDN-driven site – it might be a welcome change, and reduce grumbling (nothing like a paused connection to make a user grumpy about “the )(@$& Proxy”)…


And that, believe it or not, covers the most common TMG practices I’d suggest. Minimal TMG, maximal patching and defence in depth.

So there you have it. The most common Stuff I’ve seen over the years with TMG. Now go work out how you’re going to migrate egress to something else… (I assume Azure AD App Proxy will take care of the HTTP stuff, and/or Load Balancer Of The Year for the non-http bits…)

Hyper-V Synthetic Networking Is Much Faster

I decided it was time to look at upgrading my home broadband, mostly to get better-than-128KB upload speeds.

After the ISP-side change had eventually wound its way through, I was interested to see that while my upload speeds had improved, my download speeds still seemed capped at about 25Mbits/s (rule of thumb: divide by 8 to get megabytes per sec, so about 3MBytes/sec), despite a new modem and speeds that should’ve been “up to 100Mbit/sec”.
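That rule of thumb is just arithmetic, but for completeness, here it is as a trivial Python helper:

```python
# The rule of thumb above: link speeds are quoted in megabits per second,
# downloads show up in megabytes per second, so divide by 8.
def mbit_to_mbyte(mbit_per_sec: float) -> float:
    return mbit_per_sec / 8

print(mbit_to_mbyte(25))   # 3.125 – roughly the 3 MB/s capped rate observed
print(mbit_to_mbyte(100))  # 12.5 – what an "up to 100Mbit" plan should manage
```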

Before launching a tirade at the ISP, I re-plugged my laptop directly into the cable modem and got around 120MBit/sec test results, so I knew it wasn’t the ISP-side plan itself rate-limiting me any more. And thus began the investigation.

A packet walks into a bar

My setup looks basically like this:

(regular internal networks on 10.x subnets) -> [ TMG -> DMZ -> Smoothwall ->] Cable

With a “side path” for the Xbox, which goes (10.x) -> Smoothwall -> Cable, to benefit from the UPNP mapping feature of Smoothwall.

Technically, any UPnP device or app can use the same side path, but “regular” browsing by WPAD-enabled stuff can benefit from TMG’s web proxy cache to some extent.

Any testing I performed through Smoothwall led me back to the 25MBit capped rate. I checked QoS was off, which it was, and then decided I’d eliminate Smoothwall and try pushing the laptop through TMG directly (bypassing the Smoothwall) as a test.

The test gave me ~115MBit/sec+. Good enough!

Leaping to conclusions

The complicating factor is that everything to the right of the internal networks above is virtualized on my WS2012 R2 Hyper-V box, and I guessed that the rate limiting might have something to do with the legacy network adapters in the Smoothwall VM (one each for internal and external, plus a perimeter NIC for fun). My TMG VM uses synthetic NICs only, so I assumed that it was probably the key performance improvement when using it. The CPU on the router VM didn’t seem to be breaking 0% (externally; 4% is 100% of one core), so I guessed that was a sign the cause might’ve been semi-external, or in the hypervisor.

Rather than work out what performance counters I could use to prove this, I figured I’d try to get the legacy NICs upgraded to the new Hyper-V synthetic NICs instead.

And after reading the vast array of “here’s how you compile XYZ into your kernel” articles on it, I figured that it sounded way too hard (especially as I was using a now-legacy copy of SmoothWall Express 3.0) and finding some kind of distro which worked with Hyper-V enlightenments was probably the easiest route! No pun intended.

How do you test that? Easy, you just leave a VM at default settings – which include a new NIC – and see if it can detect the adapter. If it can, great! If it can’t, try another!

My Little Firewall (packets are magic)

First, I did a little research as to what might support Hyper-V natively, and found pfSense 2.2, based on FreeBSD 10.2, which has inbuilt Hyper-V integration services. I installed it, it found the new NICs, and after minimal faffing around, I had a working firewall, which speed tested up around 115-120MBit/sec. Boosh!

pfSense looks really capable and has lots of new options to explore, but I wondered whether upgrading to SmoothWall 3.1 (which I’d been putting off) would have yielded similar results? I don’t have a huge rule set to migrate, so I started building a fresh Smoothwall Express 3.1 VM from the 200MB ISO, and after configuring my networks from scratch, tried that… also a nice, fast result with synthetic NICs.

So I now have a choice of external firewalls (and whether to bridge the cable modem again, which I like: fewer moving parts to manage), with speedy integrated networking components, and I’m generally quite happy that non-Windows distros seem to be integrating the Hyper-V components for faster-than-legacy networking!

[Update and aside 2017-03-03 – I eventually added a scooter computer to the lineup as an always-on low-power host, and put PfSense onto it “bare metal”. But it quickly became apparent that the Realtek NIC driver included in the box was pretty suboptimal, and would drop out under mild load. So I’m now running Server 2016 on the tiny host, and PfSense in a VM on it, and it’s fast and reliable using the Windows Realtek NIC driver virtualized through Hyper-V. Ironic fun!]

App Pool Recycling Defaults: Why 1740 minutes?

Without doubt, one of the most frequently asked questions when discussing Application Pools in IIS Admin and Troubleshooting workshops!

Scott Forsyth shares Wade’s answer

In Wade’s words: “you don’t get a resonate pattern”.

then follows up with useful advice on establishing your own best recycling interval:

First off, I think 29 hours is a good default. For a situation where you don’t know the environment, which is the case for a default setting, having a non-resonate pattern greater than one day is a good idea.

However, since you likely know your environment, it’s best to change this.
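The arithmetic behind that default is simple enough to sanity-check yourself:

```python
# Why 1740? It's 29 hours – a prime number of hours just past one day – so
# each recycle drifts 5 hours later around the clock instead of landing at
# the same (possibly busy) moment every day: no resonant pattern.
minutes = 1740
hours = minutes // 60
print(hours)       # 29
print(hours % 24)  # 5 – the daily drift, in hours
```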

Lots of useful information and commentary in a post that’s sure to become a genre classic. Really. No sarcasmo.

IIS 7: But why do I get a 500.19 – “Cannot add duplicate collection entry” – with 0x800700b7!?

(Because I’m sure that was your exact exclamation when you hit it!)

Also applies to IIS 7.5 (Windows Server 2008 R2), IIS 8.0 (Windows Server 2012), IIS 8.5 (Windows Server 2012 R2) and IIS 10 (Windows Server 2016).

The Background

This week, I was out at a customer site performing an IIS Health Check, and got pulled into a side meeting to look at an app problem on a different box.

The customer was migrating a bunch of apps from older servers onto some shiny new IIS 7.5 servers, and had hit a snag while testing one of these with their new Windows 7 build.

To work around that, they were going to use IE in compatibility mode (via X-UA-Compatible), but adding HTTP response headers caused the app to fail completely and instantly with a 500.19 configuration error.

We tested with a different header (“X-UA-Preposterous”) and it had the same problem, so we knew it wasn’t the header itself.

“Now that’s interesting!”

At first I thought it was an app failure, but as it turns out…

The Site Layout

This becomes important – remember I noted that the app was being migrated from an old box to a new one?

Well, on the old box, it was probably one app of many. But the new model is one app per site, so a site was being created for each app.

The old location for the app was (say) http://oldserver/myapp/, but the contents were copied to the root of http://newsite/ on the new server.

To allow the app to run without modification to all its paths, a virtual directory was created for /myapp/ (to mimic the old path) which also pointed to the root of newsite.


So myApp above points to c:\inetpub\wwwroot, and so does Default Web Site.

Setting up the problem

So, using the GUI, I set the X-UA-Compatible header to IE=EmulateIE7. The GUI wrote this to the web.config file, as you can see in the status bar at the bottom:


Browsing to a file in the root of the website works absolutely fine. No problem at all.

Browsing to anything in the /myApp/ vdir, though, is instantly fatal:


If you try to open HTTP Response Headers in the /myApp/ virtual directory, you also get a configuration error:


What does that tell us? It tells us that the configuration isn’t valid… interesting… because it’s trying to add a unique key for X-UA-Compatible twice.

Why twice? Where’s it set? We’re at the site level, so we checked the Server level HTTP Response Headers… blank.

But… it’s set in a web.config file, as we can see above. And the web.config file is in the same location as the path.

Lightbulb moment

Ah. So we’re probably processing the same web.config twice, once for each segment of the URL!

So, when the user requests something in the root of the site, like http://website/something.asp:

1. IIS (well, the W3WP configuration reader) opens the site’s web.config file, and creates a customHeaders collection with a key of X-UA-Compatible

2. IIS Serves the file

And it works. But when the user requests something from the virtual directory as well – like http://website/myApp/something.asp:

1. IIS opens the site web.config file, and creates a customHeaders collection with a key of X-UA-Compatible

2. IIS opens the virtual directory web.config file (which is the same web.config file) and tries to create the key again, but can’t, because it’s not unique

3. IIS can’t logically resolve the configuration, so responds with an error
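To make that concrete, here’s a toy Python model of the per-segment configuration merge – emphatically not IIS itself, just the duplicate-key logic described above:

```python
# Toy model: each config file is a list of (operation, name, value) tuples.
# A vdir pointed at the site root makes the *same* file contribute twice.
def merge_headers(config_files):
    headers = {}
    for cfg in config_files:
        for op, name, value in cfg:
            if op == "clear":   # models the <clear /> element
                headers.clear()
            elif op == "add":   # models <add name="..." value="..." />
                if name in headers:
                    raise ValueError(
                        f"Cannot add duplicate collection entry '{name}'")
                headers[name] = value
    return headers

site = [("add", "X-UA-Compatible", "IE=EmulateIE7")]

merge_headers([site])            # request to the site root: one pass, fine
try:
    merge_headers([site, site])  # request via /myApp/: same file read twice
except ValueError as err:
    print(err)
```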

Options for Fixing It

1. Don’t use a virtual directory

(or rather, don’t point the virtual directory to the root of the website)

This problem exclusively affects a “looped” configuration file, so if you move the site contents into a physical directory in that path, it’ll just work.

There will be one configuration file per level, the GUI won’t get confused, and nor will the configuration system.

Then you just use a redirecting default.asp or redirect rules to bounce requests from the root to /myApp/ .

2. Clear the collection

You can add a <clear /> element to the web.config file, and that’ll fix it for any individual collection, as shown here:

      <clear />
      <add name="X-UA-Compatible" value="IE=EmulateIE7" />

The clear element tells IIS to forget what it thinks it knows already, and just go with it. (When you break inheritance of a collection in the GUI, this is what it does under the covers).

The problem with this approach is that you need to do it manually, and you need to do it for every collection.

In our case, we had Failed Request Tracing rules which failed with the same type of error promptly after we fixed the above problem, confirming the diagnosis.

3. Move the configuration

And this splits into two possible approaches:

3a. Editing Applicationhost.config by hand

You can remove the web.config and use a <location path="New Site/myApp"> tag in applicationhost.config to store configuration, and that’ll work until someone uses web.config again.

3b. Using Feature Delegation

If you do want to prevent web.config being used, you can use the Feature Delegation option to make all properties you don’t want people to set in web.config files Read-Only. (aka “lock” those sections). “Not Delegated” would also work.


This can be done per-Site using Custom Site Delegation, or globally.

And! This has the added happy side-benefit of making the GUI target applicationhost.config, rather than creating or modifying web.config files for that site.


Hope that helps you if you hit it…

Configuring Kerberos for SharePoint farms – a generic gotchas list

Recently, I worked on a Kerberos configuration issue with a customer; these are my notes from the visit.

You’ll see some common themes with Kerbie Goes Bananas, and it puts much of that into practice. Speaking of, I must redo Kerbie with SetSPN -S  (shameface)


1. DNS should use an A record to refer to the load balancing IP, not a CNAME

This configuration step avoids an Internet Explorer behaviour whereby IE resolves a CNAME into an A record, and requests a ticket by building an SPN for the A record, instead of the CNAME.

More information is available at . In most cases, adjusting the behaviour of Internet Explorer across all machines is harder than adjusting the DNS entry involved.

2. SPNs must be registered against the Application Pool Account

Note: use the Windows 2008 (or later) version of SetSPN to identify problems such as duplicates when updating SPNs. Any existing document using SETSPN -A should be updated to use SETSPN -S.

Only two SPNs are required for Kerberos to function against a farm – the FQDN, and the short hostname.

These must be applied to the account used by the Application Pool receiving the user request, which practically means that in most cases, only one account is usable per hostname (pair).

SPNs to be registered (substituting your own short and fully-qualified host names) are:

http/sitename
http/sitename.fully.qualified.example.com

Against the user identity of the Application Pool the user is connecting to – say, DOMAIN\SPAccount. This must be a domain account when used in a Farm scenario.

Note that no port number is used for the default port, and that these SPNs are also used for TLS/SSL.


If the individual hostname is to be used occasionally (e.g. for troubleshooting), http/machinename and http/machineFQDN should be registered against that account as well.

This should result in a list of SPNs as shown:

setspn -l DOMAIN\SPAccount

Registered ServicePrincipalNames for CN=SharePoint App Pool Account,OU=Service Accounts,DC=example,DC=com:



3. The App Pool Account must be used for authentication

In a web farm scenario, a domain account must be used as the application pool identity. Once a suitable domain account is configured as the application pool identity (DOMAIN\SPAccount in this example), Kernel-Mode Authentication must be disabled, or the configuration’s useAppPoolCredentials property must be set to true (both may be used).

If this step is not performed, the app pool will not be able to decrypt the Kerberos ticket supplied by the client.

To disable Kernel-mode Authentication

Open InetMgr (IIS Manager), browse to Authentication for the site, click Windows Authentication and open Advanced Settings (Actions pane on the right), and untick “Use Kernel-mode Authentication”.


To set useAppPoolCredentials to true:

Open a CMD window as Administrator, then:

CD %windir%\system32\inetsrv

appcmd.exe set config -section:system.webServer/security/authentication/windowsAuthentication -useAppPoolCredentials:true

Note: one line (wrapped), with no space after any dash (-) character.


4. Performance – Kerberos and NTLM

Use of Kerberos should significantly reduce traffic between WFEs and Domain Controllers.

Every NTLM-authenticated connection requires the server to make a connection to a DC to complete authentication. The number of connections available to a DC simultaneously is governed by the MaxConcurrentApi registry value.

Kerberos allows the client to authenticate to a DC once for the website, and to continue to use the ticket for the ticket lifetime (10 hours by default), across multiple connections, without necessarily needing to interact with the DC again.
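To put rough numbers on that (mine, purely illustrative – not from any benchmark):

```python
# Assumed figures for a single ticket lifetime: 5,000 clients, each making
# 200 authenticated connections to the farm.
clients = 5_000
connections_per_client = 200

ntlm_dc_ops = clients * connections_per_client  # NTLM: one DC round-trip per connection
kerberos_dc_ops = clients                       # Kerberos: ~one ticket per client per lifetime

print(ntlm_dc_ops)      # 1000000
print(kerberos_dc_ops)  # 5000
```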


MaxConcurrentApi (original article) (now supports 150)

Kerberos vs NTLM authentication with ISA Server (same concepts apply with Sharepoint or any Web app)

And a third-party performance comparison of Kerb and NTLM authentication with kernel-mode authentication and without was found here (not overall site performance, just basic RPS).

PSA: You really need to update your Kerberos setup documentation with SetSPN -S!


You might remember me from such posts as Kerbie Goes Bananas, and SetSPN improvements for Windows 2008. Or something.

I’m here with a public service announcement! Excitement!

It’s been long enough since Windows 2008 (and the downlevel release of SetSPN) that I feel comfortable respectfully asking you to please:

Search and Replace SetSPN -A with SetSPN -S.

In your organization, if you ever happen to run across a document that describes a procedure that looks anything like this:

SetSPN -A http/yourwebfarm DOMAIN\YourFarmAccount


  •  mail the author, or
  •  file a bug against the content, or
  •  use the Community Content feature if it’s somewhere on Technet, or
  •  mail anyone and everyone responsible for upkeep or implementation of that document

to change the SETSPN -A command to a SETSPN -S.

You may need to include a foreword describing where to get the 2008 version of SetSPN (I think I may have just spoiled it for you) if you’re still strongly a 2003/XP shop, with no newer SetSPN-toting OSs available.

Why the change?

Because it’ll hurt you less in the long run.

The original release of SetSPN was strongly account-centric. Given a Windows account, it would let you:

  • Add an SPN to that account
  • Remove an SPN from that account
  • List the SPNs associated with that account

Unfortunately, this makes it very easy to add the same SPN to multiple accounts – creating a duplicate SPN. This is a very bad thing.

The same SPN can’t easily be added more than once to the same user account, but the original tool does nothing to prevent the same SPN being added to multiple user accounts – and unfortunately, that’s exactly the situation you’re trying to avoid.


Any of:

  • SETSPN -A http/farm DOMAIN\FarmUser followed by SETSPN -A http/farm DOMAIN\FarmComputer$
  • SETSPN -A http/farm DOMAIN\FarmComputer1$ followed by SETSPN -A http/farm DOMAIN\FarmComputer2$
  • SETSPN -A http/farm ANYTHING followed by SETSPN -A http/farm for any other account

breaks Kerberos for http://farm.

To restate the rule: One SPN can be associated with precisely one account.

So please, use SetSPN -S

And that’s exactly what SETSPN -S is designed to prevent. SETSPN -S performs a quick check for duplicates before adding an SPN – which is the best possible time at which to catch the problem. So yay-the-Windows-2008-AD-team.
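The duplicate pre-check is simple to picture. Here’s a Python sketch of the idea – a dict standing in for AD; this is not how SetSPN is actually implemented:

```python
# Refuse to register an SPN that any *other* account already holds,
# which is the check SETSPN -S effectively adds over SETSPN -A.
def register_spn(directory: dict[str, set[str]], account: str, spn: str) -> None:
    for owner, spns in directory.items():
        if spn in spns and owner != account:
            raise ValueError(f"Duplicate SPN: {spn} is already registered to {owner}")
    directory.setdefault(account, set()).add(spn)

ad: dict[str, set[str]] = {}
register_spn(ad, r"DOMAIN\FarmUser", "http/farm")           # first registration: fine
try:
    register_spn(ad, r"DOMAIN\FarmComputer$", "http/farm")  # duplicate: refused
except ValueError as err:
    print(err)
```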

Duplicates! Gotta Catch ‘Em All 2011 Edition

If you suspect you have duplicate SPNs in your environment, well, why just suspect? Run

SETSPN -X

to be told explicitly what duplicates you have kicking around in AD (there are forestwide switches you can use too). Yep, that used to be a nasty LDIFDE export with an LDAP filter expression; much simpler now!


DebugDiag 1.2 (64-bit capable, .Net 2.0+ compatible) released

Great news, everybody! Wait, that was Farnsworth-y – no, really, it’s great news!

DebugDiag 1.2 (or to give it its full title, the Debug Diagnostic Toolkit) has been released to the web and is available from the Microsoft Download Center.

Notes from the email that described this release:


· .Net 2.0 and higher analysis integrated to the Crash Hang analysis.

· SharePoint Analysis Script.

· Performance Analysis Script.

· .NET memory analysis script (beta).

· Native heap analysis for all supported operating systems


· Generate series of Userdumps.

· Performance Rule.

· IIS ETW hang detection. 

· .NET CLR 4.0 support.

· Managed Breakpoint Support.

· Report Userdump generation to the Event log.


· Import/Export of rules and configuration, including ‘Direct Push’ to remote servers.

· Enterprise deployment support using XCopy and Register.bat.

Non-supported items

· x64 userdump analysis on x86 systems.

· Installing x86 DebugDiag on x64 systems.

· Installing 1.2 and 1.1 DebugDiag on the same system.

· 1.2 Memory leak analysis of 1.1 leaktrack.

· Analysis of x86 Userdumps generated by x64 debugger.

Notes about this release:

– Uninstall all previous DebugDiag versions before you install DebugDiag 1.2.

DebugDiag makes analysis of IIS (and sometimes random process) user mode memory dumps a cinch. This long-awaited release should be helpful to any IIS admin or developer trying to troubleshoot problems with their web site or server.

Windows Live Sync Beta “Sorry” Errors

I’m in the process of updating my Mesh-and-Sync empire to the new Sync.

On all of the Server operating systems (all 2008 R2), I’d install the new Live Sync Beta, and then find that they reported something along the lines of:

“Sorry, there is a problem with the sync servers. Please try again in a few minutes”

…Only I waited, and it never got any better. On the second server where I experienced this problem, I figured there must have been something wrong at the client end.

My TMG proxy logs showed multiple attempts to connect to before the failure (all successful, but doing it three times suggests a failure condition to me).

Figuring it was something credential-related, I browsed to Hotmail and signed in with my current passport (Windows Live ID – I’d undergone an account merge/migration a while back to switch email addresses), and noticed my old WLID email address was offered on all these machines.

Once I’d signed into Hotmail with my current credentials, the error went away.

So: local cached credentials broken; using the browser seems to have fixed it on all three servers where I had this problem.

So far: Love the new Remote Desktop features, which are an improvement over Mesh, but I miss the activity view from the old Foldershare-based Sync.

Telstra HTC HD2 Connection Settings

The Shiny New Phone

I bought an HTC HD2 sight-unseen from the Telstra online store yesterday, and the “five business days” for delivery worked out to be one in total. Score!

With the phone as supplied, some aspects are uncustomizable, and the Internet connection is one of those things.

Most Internet-connecting items worked fine – browsing the web, Twitter, etc – but ActiveSync kept complaining about my connection settings. I verified passwords and server settings, but nothing seemed to work.

The Fix

A tip from a colleague fixed it:

Go through the ActiveSync Wizard to add your mail information. On the very last page, there is an Advanced button. Click that baby and you’ll find a connection drop down. Select the Internet or Telstra.Internet connection.

And now, I have mail through Exchange again. W00p!

Now, to work out how to uncustomize the HTC Sense app a bit.

On the ISA Server Security Update

Rambling my way to a point

One of my most favourite “Favorites” (read: “he snarled”) in recent weeks has been the ISA Server Product Team’s Build Numbers post.

They helpfully list the version numbers of each ISA Server, um, version, along with a link to the most recent hotfix for that version. That’s so helpful.

But: In most cases, you had to use the self-service hotfix feature to get that hotfix. Which is better than calling someone, but still not quite one-click conweenyence.

And there was some useful stuff fixed in each – you can do the research (hint: research is typically along the lines of “isa server hotfix” in whatever search engine you use).

Back to the security update: if you look at the file list for the security updates, they look a lot like the file lists for the recent hotfixes.

(Aside from a little while ago: nice that we’re again using KB articles for file information and not just “you should read the bulletin” placeholders. Makes it easier to reliably find file version information in the one place. No idea who changed it in the first place, but my blunt message to you: that was suboptimal.)

I know you love short versions, Glenda

So, long story short, by applying the security update, you’re getting the most recent build of those binaries for your ISA Server.

Just one caveat: remember that with this patch, you’ll need to reapply it if you make any significant installation-level changes to ISA later (see the bulletin for that).