Website Security Suggestion: Get rid of cruft! (script included)

Right: One of my pet hates is cruft on a production website.

Cruft is stuff – files – which has accumulated because nobody’s paying attention. Cruft includes sampleware. Developer experiments. Readmes. Sample configs. Backups of files which never get cleaned up. Just general accumulated stuff. It’s website navel lint. Hypertext hairballs.

Cruft. Has. No. Place. On. A. Production. Website!

Worst-case, it might actually expose security-sensitive information. (That’s the worst type of cruft!)

Want to find cruft? Well, easiest way to start is:

D:\WebContent> dir /s *.txt

That’s a good start. For every Readme.txt, add 10 points. For every web.config.txt, add 1000 points (why? That’s a potentially huge problem – .config is blocked by Request Filtering by default (with certain exceptions), but .config.txt: no problem! Download away.)

If you score more than 10 points, you need to rethink your strategy.

  • There is no reason for files like readme.txt to exist within your production website
    • Okay, there’s one reason and that’s when you’re providing one you know about, and have vetted, for download.
      • I mean, obviously if the site is there to provide readme.txt files for apps people are downloading, great! But if it’s the readme for some developer library which has been included wholesale, bad pussycat.
  • There is no reason for files like web.config.bak to exist within your production website.
    • Luckily, .bak files aren’t servable with the default StaticFileHandler behaviour. But that doesn’t mean an app (or * scriptmap…) can’t be convinced to hand you one…
  • If you have web.config.bak.txt files, you’re asking for trouble.
    • Change your operational process. Don’t risk leaking usernames and passwords this way.
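If you want to go beyond DIR, the scoring idea above fits in a dozen lines of Python. Purely illustrative: the point values for readme.txt and web.config.txt come from the rules above, and the other patterns and scores are my own assumptions.

```python
import os

# Rough cruft scoring, per the rules above: readme.txt = 10 points,
# web.config.txt (a downloadable copy of a blocked file) = 1000 points.
# The .bak entries are my own illustrative additions.
SCORES = {
    "readme.txt": 10,
    "web.config.txt": 1000,
    "web.config.bak": 500,
    "web.config.bak.txt": 1000,
}

def score_cruft(content_root):
    """Walk a web content folder and return (total_score, findings)."""
    total, findings = 0, []
    for folder, _dirs, files in os.walk(content_root):
        for name in files:
            points = SCORES.get(name.lower(), 0)
            if points:
                total += points
                findings.append((os.path.join(folder, name), points))
    return total, findings
```

Same rule as above applies: if `score_cruft(r"D:\WebContent")` comes back with more than 10 points, rethink your strategy.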

The Core Rationale

Web developers and site designers should be able to explain the presence of every single file on your website.

I don’t care if it’s IIS or Apache or nginx or SuperCoolNewTechnologyX… the developers should be responsible for every single file deployed to production.

And before the admins (Hi!) get smug and self-satisfied (you still can, you just need to check you’re not doing the next thing…), just check that when you deploy new versions of Site X, you’re not backing up the last version of Site X to a servable content area within the new version of Site X.

For example, your content is in F:\Websites\CoolNewSite\ with the website pointed to that location…

  • It’s safe to back up to F:\Backups\CoolNewSite\2016-11-13 because it’s outside the servable website
  • It’s not cool to back up to F:\Websites\CoolNewSite\2016-11-13 because that’s part of the website.
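That rule is easy to automate before a deployment script runs. A minimal sketch (plain Python; paths shown POSIX-style so it runs anywhere – on the server you’d pass the real F:\ paths):

```python
from pathlib import Path

def backup_is_safe(site_root, backup_dir):
    """Return False if backup_dir sits inside the servable site_root."""
    site = Path(site_root).resolve()
    backup = Path(backup_dir).resolve()
    # A backup landing under the content root is servable cruft waiting to happen.
    return site not in backup.parents and site != backup
```

So `backup_is_safe("/Websites/CoolNewSite", "/Backups/CoolNewSite/2016-11-13")` is fine, and the same backup under the site root is not.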

How Do I Know If I’m Crufty?

As I do, I started typing this rant a while ago, and then thought: You know what? I should script that!

I had a bunch of DIR commands I was using, and sure, could’ve just made a CMD, but who does that these days? (Says my friend. (Singular))

Then {stuff}… but it finally bubbled to the top of my to-do list… So I wrote a first draft Get-CruftyWebFiles script.

I’ve lots of enhancement ideas from here, but wanted to get something which basically worked. I think this basically works!

Sure, there’s potential duplication if sites and apps overlap (i.e. the same file might be listed repeatedly) (which is fine; I figure you weed that out in post production), and if your site is self-referential it might get caught in a loop (hit Ctrl+C if you think/know that’s you, and *stop doing that*)

So, feel free if you want to see how crufty your IIS 7.5+ (assumed? Tested on 8.5) sites are:

The Script:

Usage (roughly):

Copy to target web server. Then from an Admin PS prompt:

  • .\Get-CruftyWebFiles.ps1   # scans all web content folders linked from Sites, and outputs to .\crufty.csv
  • .\Get-CruftyWebFiles.ps1 -WebSiteName "Default Web Site"     # limits to just the one website.
  • .\Get-CruftyWebFiles.ps1 -DomainName "YOURDOMAIN"    # checks for that text string used in txt / xml files as well

Pull the CSV into Excel, Format as Table, and get sorting and filtering. Severity works on a lower-is-more-critical basis. Look at anything with a zero first.
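No Excel handy? The same triage is a few lines of Python. The column names here (FullName, Severity) are my assumption about the CSV layout – adjust to match whatever the script actually emits:

```python
import csv

def worst_first(csv_path, limit=20):
    """Read the cruft CSV and return the most critical rows first.
    Severity is lower-is-more-critical, so sort ascending."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: int(r["Severity"]))
    return rows[:limit]
```

Anything with Severity 0 floats to the top, which matches the “look at anything with a zero first” advice.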

Todo: Cruft Scoring (severity’s already in there), more detections/words, general fit and finish. Also considering building a cruft module for a security scanner, or just for the script, to check what’s findable on a website given some knowledge of the structure.

* oh! No I’m not

Sunsetting TMG 2010 with some (free!) Best Practices

Long and boring post ahead. So: KITTENS! There. Fluffy now.

As one of the Premier Field Engineers performing ISA Server Health Checks and then Threat Management Gateway (TMG) configuration reviews (by default, from my long association with Proxy 2.0 and then ISA), I was reviewing a document I put together for a customer just before shredding it, and thought:

You know what? Everyone should do these things! These recommendations are common enough that I seem to make them every time I see a TMG box… so why not generalize and recommend them here? Put them out into the wild. Get them shouted down. Give them their time in the sun.

So on the off-chance you’re a survivor of the TMG Survival Guide and you’re looking for some last-minute as-seen-in-the-real-world TMG corrective advice – and by “last minute”, I mean:

  • You know the base product is in Extended Support until 2020, then it’s going away. (sniff!)
  • You understand that Malware Scanning and Network Inspection System are already frozen at their last update level.
  • You know URL Categorization (Filtering) got turned off already so any rules using it might fail-open (or fail-closed)…

And in terms of pre-migration work

  • You’ve also been through your rule set, and tested that everything’s Least Privilege-compliant,
    • i.e. No broad “everyone can access anything/TMG/anywhere with any protocol” rules or anything like that.
      • No really, if you can connect to TMG via SMB, that’s usually not a good sign… You’re at least using Windows Update for patches, though, right?
  • Maybe you’ve performed an ISAINFO (and/or TMGBPA) export of your rule set so that you can ease the process of recreating them on the next egress device you pick? 🙂

…Because these are all fantastic first steps on the long migration path between proxies. If you haven’t done them, do put them on the list.

So before you shut down TMG that final time, and repurpose the boxes for Quake servers (or whatever you kids use spare boxes for these days)…

What best practices can you apply in the meantime? Glad you asked!

Here’s the short list, the detail follows.

Proactively Protect The Box

  • Install the latest Windows Updates
  • Install the latest TMG Rollup Hotfix (SP2 UR5, potentially + .650 or later)
  • (Install any updates for any other software on the box)

Operating System Protection

  • Firewalling
  • De-Adminning
  • Attack Surface Reduction
  • AV exclusions

TMG Health and Perf

  • Check Tracing isn’t enabled
  • Disable/Relax Flood Prevention

And now the details…

Proactively Protect The Box

“It’s a firewall, it doesn’t need patching!” (just for clarity: that’s not true)

Install the latest Windows Updates

  • If you’re not installing Windows updates, um, I don’t know what to tell you?

You understand that unpatched vulnerabilities win over security settings, permissions and antivirus, right? Any on-box control can potentially be circumvented by an unpatched (bad) vulnerability.

And you’re still thinking it’s optional? Well! That’s nice! I hope you’ve a mitigation strategy in place, and an incident response plan for when that one fails.

TMG defends itself pretty heavily against network attack (a: by default; b: to an extent – it still leverages OS components for certain chunks of functionality), but lots of people end up creating rules which – paraphrased – allow the Internal network to hit any port on the TMG computer. Because reasons!

This is the same pathology which leads people to not patch their CAs, or not to use firewalling between hosts on their internal network – it’s the opposite of a defence in depth approach!

Anyway, back to updates:

  • When I check the update state of a box, I do so by running MBSACLI (the command-line version of MBSA) using the current WindowsUpdate CAB if the box doesn’t have Internet connectivity.

mbsacli /xmlout /nvc /nd /wi /catalog .\ /unicode > %computername%-MBSA.xml

    • I actively avoid using the default customer WSUS catalog, because it’s completely possible to be 100% compliant with the WSUS approval policy and have unapproved updates missing from five years ago, which were skipped for a good reason, but then that decision was never revisited.
  • It is uncommon in my experience to find that servers are up to date. For a security appliance at the edge of the network, used as an ingress or egress point by thousands of clients, this is suboptimal.


Windows Server 2008 R2 Service Pack 1 is needed for Security Updates

Keep in mind that some updates require the presence of a Service Pack or other major update.

  • So the first thing I’d check is WinVer.
  • If WinVer says you’re on Windows 2008 R2 version 7600 and doesn’t mention a Service Pack, you need to get to 7601 (Service Pack 1) pronto, and then start applying all the updates which have required SP1 – say, the last 4-5 years’ worth, which includes many Critical updates.
  • Windows 2008 should be at SP2. If it’s not at SP2, same thing applies as above.

This, again, is sadly not uncommon.
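The WinVer check reduces to a build-number comparison (6.1.7600 is 2008 R2 RTM, 6.1.7601 is SP1, as above). A quick illustrative sketch:

```python
def needs_service_pack(version):
    """Return True if a Windows 2008 R2 version string is pre-SP1.
    2008 R2 RTM is build 7600; SP1 is build 7601."""
    major, minor, build = (int(x) for x in version.split(".")[:3])
    return (major, minor) == (6, 1) and build < 7601
```

So `needs_service_pack("6.1.7600")` is your “pronto” case; 7601 gets a pass (onto checking the post-SP1 updates instead).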


If You Found You Had Something Missing: Why Not Just Use Windows Update?

  • If you find servers are missing updates because { ¯\_(ツ)_/¯ }, my standard remediation suggestion is: just point them at public WindowsUpdate and specify your schedule. Let them pop out through a proxy, or go direct if they’re edge devices.

Yep. I’m serious. Better a security-sensitive device which is up to date by automatic patching at 3am on a Thursday than one which is out of date at all times by policy.

See also: Least Privilege Rule Set. If an attacker can’t hit the vulnerable port, you don’t have that problem.


Install the latest TMG Rollup Hotfix

Now, don’t misunderstand me: TMG isn’t the simplest thing in the universe to update (unlike its predecessor ISA Server, which was a positive dream by comparison). But if you’re reading this, you probably work in IT, so that’s not actually an excuse not to do it! 🙂

Yes, it’s a pain going from RTM to SP1 to SP1 + U1 to SP2 to SP2 Rollup 5, but… you should do it. You need to do it. If you’re one rollup behind, you’re actually 12-18 months of updates out of date. With hundreds of builds in between. Many issues have been fixed over the years, including hangs, crashes, and possibly a security update or two, if memory serves.

  • The latest rollup version I’m aware of is TMG Service Pack 2 with Update Rollup 5. If Help/About in the TMG MMC shows you a version earlier than 7.0.9193.644, well – that update was from 2014.
  • There’s one post-rollup hotfix I’ve seen (which is for SNI websites with HTTPS inspection enabled, but it provides a version bump to .650 for many core components too) which gets us to April 2015.
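Checking Help/About against those build numbers is just a dotted-version comparison – a quick illustrative sketch, using the .644 build mentioned above as the baseline:

```python
def version_tuple(v):
    """Turn '7.0.9193.644' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

# SP2 UR5 core build, per the text above; .650 is the post-rollup hotfix level.
SP2_UR5 = version_tuple("7.0.9193.644")

def is_behind(installed):
    """True if the installed TMG build predates SP2 Update Rollup 5."""
    return version_tuple(installed) < SP2_UR5
```

Tuple comparison does the right thing here, which is why you don’t compare version strings as strings.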


Operating System Protection

Lifecycle and post-Lifecycle Firewalling

In April 2020, TMG exits Extended Support and is no more.

But by a quirk of the Support Lifecycle, Windows Server 2008 (and R2) actually exits Extended Support in January 2020, so a TMG box running down the clock will potentially be partially unprotected from an OS security updates perspective between January and April. (Unless a Custom Support Agreement is available, but it’s probably more costly than the alternative). So it’s not a terrible assumption that you’ve basically got until Dec 31, 2019 to get everything sorted out.

  • I don’t mind restating the obvious, so I will: You should have migrated away from TMG before the end of 2019. Please!
    • That’s still 3 years from now to plan and execute your migration
    • So if you haven’t already started, please add it to your “To Do: 2017” list now.
  • If you do still have some TMG kicking around at that point, consider hardening the TMG Firewall policies (including the System policies) to limit all nonessential connectivity to the TMG hosts by any other computer.
    • In fact, think about doing that anyway, particularly if you actually had work items pending from the “Install Windows Updates” item above. Because that’s an attack surface exposure compounded with known vulnerabilities. That’s a poor combination for a security device.

If you’re planning to run beyond the end of support, don’t!

But if you do find yourself there: also think about defence in depth approaches. The sort you’d want to take with a Windows 2000 machine on your network if some business unit decided it needed to be added this year: isolate, put external firewalls in front of and behind it, so you seriously limit the ingress and egress paths available to it in case of compromise. Yes, TMG’s a firewall, but trusting {the actions of an on-box firewall which isn’t receiving security updates any more (in 2020)} on {an operating system which also isn’t receiving security updates any more} seems like it’s a bad bet compared to an external security device which is presumably still getting updates. Yah?



De-Adminning

  • Just check the membership of any groups who have Admin permission to the box.
  • Then eliminate any local admins except one (if you don’t fully de-admin boxes), and remove any Domain groups you can.

Then, unless you’re sure (I mean certain, i.e. you’ve checked, not “I assume it’s quite unlikely”) that a) there’s only one local Admin account, and b) the password for that local Admin account is already unique and not known to anyone unauthorized, reset the remaining Admin password to a unique value (unless you’re already a LAPS shop, or use other password management tools… but please, check whether TMG’s part of the LAPS group, don’t just assume it is… that’s how SUS patching doesn’t work too!)


Basic Attack Surface Reduction

Most TMG boxes seem to have management agents for something or another installed on them. Actually, as a related observation, it’s not uncommon for me to find servers with multiple management agents for multiple generations of monitoring systems on them. Often disused ones. These are pure attack surface additions, and often running with privileged access levels. Very often with known vulnerabilities.

In short: Either kill ’em, or at least make sure they can’t be contacted over the network (using Firewall policy).

  • If you have looked at them in the last 6 months, you can be excused from this item.
  • If not, check to see what the file dates of the EXEs are. If they’re over 3 years old, they’re probably a liability and almost certainly aren’t being updated, and simply represent an increased attack surface, so consider removing them.
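The file-date test is scriptable too. An illustrative pass over an agent folder, flagging executables untouched for 3+ years (the folder to scan is yours to supply):

```python
import os, time

THREE_YEARS = 3 * 365 * 24 * 3600  # close enough for triage purposes

def stale_executables(folder, now=None):
    """List .exe files whose last-modified time is over ~3 years ago."""
    now = now if now is not None else time.time()
    stale = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            if name.lower().endswith(".exe"):
                path = os.path.join(root, name)
                if now - os.path.getmtime(path) > THREE_YEARS:
                    stale.append(path)
    return stale
```

Anything it returns is a candidate for removal (or at least for a firewall rule that stops it being contacted).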



Antivirus Exclusions

Observe the exclusions needed for Antivirus when running on a TMG host. If you don’t exclude the right stuff, it can get a bit jammed up.


TMG Health


Check Tracing Isn’t Enabled

This one’s much less common than the above few.

  • Run RESMON for a short while, look at the Disk IO area, and sort by Bytes Total/sec. Note any files which have lots of IO over a 2 minute period.
    • (The idea is to try to minimize IO where possible)
  • If activity to ISALOG.BIN is chewing through a megabyte or more per second, TMG may be tracing something
    • or still tracing something – this has been seen when TMGBPA is used to run a diagnostic trace but for whatever reason it doesn’t terminate cleanly.
    • It might also indicate a diagnostic logging session is in progress (just check the console under Troubleshooting –> Diagnostic logging and hit Disable if it isn’t already disabled).

If you find sustained heavy isalog.bin activity, run the ISA Data Packager again, open the tracing options, then untick everything.

Note that in my experience, some minimal isalog.bin activity (say, under 64K/sec) is normal.


Flood Mitigation

Going to say something a bit controversial here: You might want to experiment with turning off or massively increasing the defaults for flood prevention, particularly for outbound scenarios.

The defaults for this feature haven’t changed since it was introduced in 2004, but wow, Internet surfing patterns sure have.

So I say:

  • Try a 10X increase in the numbers, particularly for HTTP and TCP connections, and see how you go.
    • If it stops the constant alerting about “infected clients”, and you’ve got burnout from chasing them down only to find it was Bruce in Marketing manually opening eighteen instances of Firefox to their brand new multi-pronged CDN-driven site, it might be a welcome change, and reduce grumbling (nothing like a paused connection to cause a user to get grumpy about “the )(@$& Proxy”)…


And that, believe it or not, covers the most common TMG practices I’d suggest. Minimal TMG, maximal patching and defence in depth.

So there you have it. The most common Stuff I’ve seen over the years with TMG. Now go work out how you’re going to migrate egress to something else… (I assume Azure AD App Proxy will take care of the HTTP stuff, and/or Load Balancer Of The Year for the non-http bits…)

TMG Rollup 3 out now; so’s Mod_Security for IIS

TMG SP2 Update Rollup 3

As the ISA Blog mentions, Rollup 3 for TMG Service Pack 2 is now available:

We are happy to announce the availability of Rollup 3 for Forefront Threat Management Gateway (TMG) 2010 Service Pack 2 (SP2). TMG SP2 Rollup 3 is available for download here: Rollup 3 for Forefront Threat Management Gateway (TMG) 2010 Service Pack 2

Please see KB Article ID: 2735208 for details of the fixes included in this rollup.

The Build Number for this update is: 7.0.9193.575

Fair number of new fixes included and it looks like a worthwhile update. I’m putting it on my home TMG box tonight. As a reminder, the hotfix rollups are cumulative for a given Service Pack, so if you’re already at Service Pack 2 (and you should be) you just need SP2UR3, even if you skipped UR1 or UR2.

Mod_Security for IIS

In other security-related news, mod_security for IIS hit a stable release at 2.7.2, as the SRD blog notes:

We are pleased to announce the release of a stable version of the open source web application firewall module ModSecurity IIS 2.7.2. Since the announcement of availability of the beta version in July 2012, we have been working very hard to bring the quality of the module to meet the enterprise class product requirements. In addition to numerous reliability improvements, we have introduced following changes since the first beta version was released:

  • optimized performance of request and response body handling
  • added “Include” directive, relative path and wildcard options to the configuration files
  • re-written installer code to avoid .NET Framework dependency and added installation error messages to system event log
  • integrated OWASP Core Rule Set in the MSI installer with IIS-specific configuration
  • fixed about 10 functional bugs reported by ModSecurity IIS users.

Microsoft also released recently a TechNet article entitled "Security Best Practices to Protect Internet Facing Web Servers", which explains in details benefits of deploying a WAF module on a web server.

The Technet article referenced above is worth a read if you’re charged with delivering IIS web server security for random applications!


Where’s Waldo?

I’m spending more time editing the MSPFE blog than here at the moment, so if you’re missing my quippy, irreverent style… tough! (But I still love you. Happy Valentine’s day! (No gifts for you this year.))

Is it time for you to reset your online identity?

Lots of account hacking activity in the news recently. The Blizzard hack (via RPS) caught my eye because of some of the wording used to describe it:

“Some data was illegally accessed, including a list of email addresses for global users, outside of China. For players on North American servers (which generally includes players from North America, Latin America, Australia, New Zealand, and Southeast Asia) the answer to the personal security question, and information relating to Mobile and Dial-In Authenticators were also accessed. Based on what we currently know, this information alone is NOT enough for anyone to gain access to accounts.”

Now, I’ve trained my parents never to use the same password on any websites connected with billing information. That’s a no-brainer.

But I’ve always lied on those secondary verifiers because it just seemed like I should. It’s intuitive to me that I’d want to have different verifiers for each website *despite* them offering the same set of questions.

But I wonder if others are as careful? The recent publicized Apple/Amazon combo hack suggests that some combinations might be unavoidable, but that doesn’t mean you can’t take other precautions.

Have you used the same “mother’s maiden name” verification information across websites? Could the compromise of information you supplied to a “throwaway” website lead to compromise of a really important one?

If so, it might be time to go through all the websites you use most frequently, and change the information there. Yes, all of it. Then write down your new lies somewhere you can find them.

Secrets should be shared between you and each website – not between you and every website.

Because until we get to an identity metasystem, where every single website doesn’t rely on independently re-verifying every single detail about your life, anything you share with any website may eventually become public information.

Scary thought.

IUSR vs Application Pool Identity – Why use either?

(pasted from my email clippings. I’m on holiday right now, catching up on paperwork!)

The TLDR version is: using AppPoolIdentity as both the App Pool Account and Anonymous user account lets you have multiple isolated anonymous websites on one box.

IIS 7.x and upwards (as of Win2008 R2 and Windows 2008 SP2, also in IIS 8.x in Windows Server 2012 and IIS 10.x in Windows Server 2016) supports a new Application Pool account type, called an ApplicationPoolIdentity. This low-privileged account can be used to isolate distinct sets of anonymous website content, without requiring the administrator to set up a unique account for each website manually.

So whereas the default IUSR anonymous account is per-server, an ApplicationPoolIdentity is per-app-pool, and IIS creates one app pool per site, by default when the GUI is used to create a site.

So by setting the ApplicationPoolIdentity as the anonymous user account for a site, you can isolate content and configuration for that site so that no other sites on the same box can access it, even if it’s an anonymous site.

And now, the long version!


Before I start: Terminology disambiguation corner (because App Pool Identity is a horribly overloaded term nowadays):

  • Application Pool Account = the account used to run the App Pool, whether custom user, NetworkService, LocalService, AppPoolIdentity or LocalSystem
  • ApplicationPoolIdentity = the new account type that has a unique App Pool Name-based identity SID (S-1-5-82-SHA1{App Pool Name})
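To make the “SID is a hash of the name” point concrete, here’s an illustrative Python sketch of that derivation. Treat the exact recipe (lowercased UTF-16LE name, SHA-1, five 32-bit subauthorities) as an assumption on my part – the takeaway is simply that the SID is a pure function of the App Pool name, so it’s the same on every machine sharing the config:

```python
import hashlib, struct

def app_pool_sid(pool_name):
    """Sketch of the S-1-5-82 derivation: SHA-1 the (lowercased,
    UTF-16LE) pool name and split the 20-byte digest into five
    32-bit subauthorities. The exact recipe is an assumption;
    the SID being name-derived is the point."""
    digest = hashlib.sha1(pool_name.lower().encode("utf-16-le")).digest()
    subauths = struct.unpack("<5I", digest)
    return "S-1-5-82-" + "-".join(str(x) for x in subauths)
```

Which is why no replication or password management is needed: every box derives the identical identity from the identical name.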

Also, a reminder that process identity is the basic “RevertToSelf” identity for a process, and that thread identity can be different from process identity via impersonation or explicit logon.

So, any or all of the threads in a process might be someone other than the process identity, but if any call RevertToSelf or somehow lose their token, they’ll snap back to acting as the process identity. (Which is the ultra-short version of why you don’t want that being LocalSystem or another privileged account.)
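A toy model of that behaviour (plain Python, nothing Windows about it, and the account names are made up):

```python
PROCESS_IDENTITY = "IIS AppPool\\Coke"   # hypothetical App Pool Account
_thread_identity = None                  # None means "acting as the process"

def whoami():
    """The effective identity: the impersonated user if any, else the process."""
    return _thread_identity or PROCESS_IDENTITY

def impersonate(user):
    """Model a thread picking up an impersonation token."""
    global _thread_identity
    _thread_identity = user

def revert_to_self():
    """Drop any impersonation; snap back to the process identity."""
    global _thread_identity
    _thread_identity = None
```

Losing the token and calling RevertToSelf both land you on the process identity, which is exactly why you don’t want that identity to be LocalSystem.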


App Pool Account:

The when-not-impersonating/process identity; used to start the app pool and to read web.config files; pretty much needs permissions to everything.

On IUSR vs Application Pool Account as anonymous:


  • IUSR has the same SID on every machine.
  • IUSR is appropriate if you run one anonymous website on the computer.
  • You secure your content to IUSR with NTFS permissions, and that website can access it.
  • If you run two websites with the anonymous account as IUSR, they can access each other’s content.
  • For low-security applications and intranet sites, that might be OK.

App Pool Account as Anonymous

The alternative is to use an App Pool Account as the Anonymous account (so a thread doesn’t bother putting on its IUSR clothes on anonymous requests)

  • ApplicationPoolIdentity has the same SID on every machine with a common config (because the SID is a hash of the name), so has the same benefit as IUSR for content security, only specific to the app.
  • It’s an appropriate choice if you run multiple anonymous websites and need isolation of content.
  • Other appropriate choice: creating an explicit user account for each App Pool and using that as anonymous.
  • (i.e. the anonymous Coke application should never be able to read the Pepsi application’s files) (arguably always the case with multiple anon websites on the one box)

Using the App Pool Account as anonymous is a good idea because it allows you to secure your content at the NTFS level for just COMPUTER\Coke or IIS AppPool\Pepsi, and be assured that Windows file system security will prevent one company’s anonymous app from reading (or otherwise affecting) its competitor’s anonymous content.

Using the AppPoolIdentity as the App Pool Account in this case is just a simple, no-hassle way of having a common user account on all machines that share the IIS configuration (or at least the name of the app pool), without having to faff about creating or replicating Windows users and worrying about their permission level.
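As a sketch of what that NTFS scoping looks like in practice, here’s some illustrative Python that just generates the icacls commands (paths and pool names are placeholders – review before running anything like this for real):

```python
def lockdown_commands(site_root, pool_name):
    """Generate icacls commands scoping content to one app pool identity.
    Placeholder names throughout; illustrative only."""
    identity = f'IIS AppPool\\{pool_name}'
    return [
        # Remove inherited grants so broad "Users can read everything" ACEs go away.
        f'icacls "{site_root}" /inheritance:r',
        # Read/execute for this pool's identity only.
        f'icacls "{site_root}" /grant "{identity}:(OI)(CI)RX"',
        # Re-grant the operational accounts stripped by /inheritance:r.
        f'icacls "{site_root}" /grant "SYSTEM:(OI)(CI)F" /grant "Administrators:(OI)(CI)F"',
    ]
```

With that applied per site, Coke’s anonymous worker simply has no token that can open Pepsi’s files.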

The bit I’m less confident on but still fairly sure I’m right:

When it gets to off-box (e.g. database) resources, you’re out of IIS-land and into app framework (ASP.Net)-land; the short version is that if your token isn’t delegable (e.g. it comes from NTLM auth), it can’t be passed to the next hop, and you’ll end up with the process identity and any limitations/benefits that incurs.

Configuring Kerberos for SharePoint farms – a generic gotchas list

Recently, I worked on a Kerberos configuration issue with a customer; these are my notes from the visit.

You’ll see some common themes with Kerbie Goes Bananas, and it puts much of that into practice. Speaking of, I must redo Kerbie with SetSPN -S  (shameface)


1. DNS should use an A record to refer to the load balancing IP, not a CNAME

This configuration step avoids an Internet Explorer behaviour whereby IE resolves a CNAME into an A record, and requests a ticket by building an SPN for the A record, instead of the CNAME.

In most cases, adjusting the behaviour of Internet Explorer across all machines is harder than adjusting the DNS entry involved.

2. SPNs must be registered against the Application Pool Account

Note: use the Windows 2008 (or later) version of SetSPN to identify problems such as duplicates when updating SPNs. Any existing document using SETSPN -A should be updated to use SETSPN -S.

Only two SPNs are required for Kerberos to function against a farm – the FQDN, and the short hostname.

These must be applied to the account used by the Application Pool receiving the user request, which practically means that in most cases, only one account is usable per hostname (pair).

SPNs to be registered are http/&lt;hostname&gt; and http/&lt;hostname FQDN&gt;:


Against the user identity of the Application Pool the user is connecting to – say, DOMAIN\SPAccount. This must be a domain account when used in a Farm scenario.

Note that no port number is used for the default port, and that these SPNs are also used for TLS/SSL.


If the individual hostname is to be used occasionally (e.g. for troubleshooting), http/machinename and http/machineFQDN should be registered against that account as well.
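Since the registrations follow a fixed pattern, here’s a trivial illustrative generator for the SETSPN commands (account and host names are all placeholders):

```python
def spn_commands(account, host, dns_suffix):
    """Build the SETSPN -S commands for a farm hostname: the short
    name and the FQDN, per the pattern above. Placeholder names."""
    fqdn = f"{host}.{dns_suffix}"
    return [f"setspn -S http/{host} {account}",
            f"setspn -S http/{fqdn} {account}"]
```

Note the use of -S rather than -A throughout, so duplicates get caught at registration time.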

This should result in a list of SPNs as shown:

setspn -l DOMAIN\SPAccount

Registered ServicePrincipalNames for CN=SharePoint App Pool Account,OU=Service Accounts,DC=example,DC=com:



3. The App Pool Account must be used for authentication

In a web farm scenario, a domain account must be used as the application pool identity. Once a suitable domain account is configured as the application pool identity (DOMAIN\SPAccount in this example), Kernel-Mode Authentication must be disabled, or the configuration’s useAppPoolCredentials property must be set to true (both may be used).

If this step is not performed, the app pool will not be able to decrypt the Kerberos ticket supplied by the client.

To disable Kernel-mode Authentication

Open InetMgr (IIS Manager), browse to Authentication for the site, click Windows Authentication and open Advanced Settings (Actions pane on the right), and untick “Use Kernel-mode Authentication”.


To set useAppPoolCredentials to true:

Open a CMD window as Administrator, then:

CD %windir%\system32\inetsrv

appcmd.exe set config -section:system.webServer/security/authentication/windowsAuthentication -useAppPoolCredentials:true

Note: one line (wrapped), with no space after any dash (-) character.


4. Performance – Kerberos and NTLM

Use of Kerberos should significantly reduce traffic between WFEs and Domain Controllers.

Every NTLM-authenticated connection requires the server to make a connection to a DC to complete authentication. The number of simultaneous connections available to a DC is governed by the MaxConcurrentApi registry value.

Kerberos allows the client to authenticate to a DC once for the website, and to continue to use the ticket for the ticket lifetime (10 hours by default), across multiple connections, without necessarily needing to interact with the DC again.


MaxConcurrentApi (original article) (now supports 150)

Kerberos vs NTLM authentication with ISA Server (same concepts apply with Sharepoint or any Web app)

And a third-party performance comparison of Kerb and NTLM authentication with kernel-mode authentication and without was found here (not overall site performance, just basic RPS).

ISA 2000: The End Draws Near

While updating some documentation today and noticing it’s 2011 (when, exactly, did that happen?), I dug up the ISA Server 2000 Lifecycle information.

Paraphrasing the table here:

Internet Security and Acceleration Server 2000 Enterprise Edition:

  • Availability: 18/03/2001
  • Mainstream Support Ends: 11/04/2006
  • Extended Support Ends: 12/04/2011

That’s right, kids, it expires on April 12 this year. (The date format is the *cough* correct *cough* UK/AU format above, naturally)

I have fond memories of ISA Server 2000. Actually, now I remember it, the memories were less “fond” and more around being confused by the task pane (I’m a right-click kinda guy), and the documentation, and whether packet filters were something applicable to publishing rules or not. Experience counted for a lot with it, and when it was released, it was a whole lotta new for everyone involved in using and supporting it.

ISA 2000 was where we originally derived the “two minute rule” from for ISA support (at least in Australia): When you’ve made a change, and you’re testing it, give it two minutes. (Saying that caused most type-A admin people to give it at least a minute, and a minute was usually enough for a change to percolate through the system).

I’d been a keen user of Proxy 2.0 at home and at work, on a very early cable modem implementation in Australia (see also: The Lane Cove Effect), and our geeky household upgraded through Windows 2000 betas with Proxy 2.0 patches, until finally ISA 2000 betas became available. Not too long after that, the release version was installed, glistening, on the low-spec former-work-desktop 486 we were using for routing and cheapie IIS hosting duties.

ISA 2000, you served us well. But your time is well and truly past. Bon voyage on the sea of retirement.

If you’re still using ISA 2000, and you’d like to try our new hotness, please try Forefront TMG 2010. The documentation’s better (and most ISA 2004/2006 documentation still applies), and it installs on current Windows versions. Thanks!