ISA 2000: Web Publishing a .Net Web Service

There’s no real trick to it, apart from the good old Web Publishing authentication part.

 

Short version: Treat the published website exactly as you would a real web server, don’t try to treat it like a proxy (submitting proxy authentication credentials is not the right way to authenticate when you’re outside).

 

Longer but not exhaustive version:

 

The Client
From a Web Service client programming perspective (that is, a client connecting via the Internet, or more accurately from an ISA Server non-LAT network, to a web service published through an ISA Server), there’s absolutely nothing special you need to do. Just point yourself at the ASMX and you’re away.

 

You treat ISA Server as if it were a Web Server, because from the client’s perspective, it is a web server, not a proxy (so don’t try to submit proxy authentication credentials to it!). If the Web Service requires Web authentication (eg, you haven’t baked some custom token scheme into it), you program against it as if it were an IIS server requiring authentication.
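To make that concrete, here’s a minimal sketch (Python rather than .Net, with a placeholder URL and made-up credentials) of a client treating the published endpoint as a plain authenticating web server. The key detail is that it sends an Authorization header, not a Proxy-Authorization header:

```python
import base64
import urllib.request

def basic_auth_header(user, password):
    """Build the header a client sends to a *web server* requiring Basic
    auth. Note it's Authorization, not Proxy-Authorization: from the
    outside, the ISA publishing rule looks like an ordinary web server."""
    token = base64.b64encode("{}:{}".format(user, password).encode("utf-8"))
    return "Authorization", "Basic " + token.decode("ascii")

# Hypothetical endpoint; URL and credentials are placeholders only.
req = urllib.request.Request("https://www.example.com/Service.asmx?WSDL")
req.add_header(*basic_auth_header("DOMAIN\\barry", "secret"))
# urllib.request.urlopen(req)  # would perform the actual call
```

The same idea applies whatever client stack you use: authenticate against the published name exactly as you would against an IIS box.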

 

Of course, if your client is behind a proxy server as well, you’ll need to treat that proxy as a proxy, assuming you need to do anything special to begin with (but that would apply to all web services, not just those published through an ISA Server).

 

The Server
Server authentication is where it can get mildly confusing – you potentially have multiple layers of authentication. ISA 2000 with Feature Pack 1 supports a feature called “Basic Credential Delegation”, which means that Basic credentials passed to ISA can then be forwarded to the back-end server (so ISA can pre-authenticate the user and allow/deny access to the Web Server before the user even sees the back-end server).

 

Preauthentication isn’t possible with the other authentication types, because the credentials themselves aren’t submitted to the ISA Server (only a hash or verifier), so the ISA Server can’t forward them to the back-end server. So, NTLM (Integrated) and Digest are out, leaving Basic. Basic transmits credentials in the clear, so you should SSL-encrypt the front-end connection if security of the user credentials is even vaguely important to you: SSL lets the client verify that the target server is actually the server it wants to give its credentials to, and encrypts the actual credential transfer.
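As a reminder of just how “in the clear” Basic is: the credentials are merely base64-encoded, so anyone who can see the unencrypted traffic can recover them trivially (the header value below is made up for illustration):

```python
import base64

# A captured Authorization header from an unencrypted Basic exchange
# (example value only -- not a real credential):
header = "Basic dXNlcjpzM2NyZXQh"

# Recovering the username and password is one line of decoding.
user, _, password = (base64.b64decode(header.split(" ", 1)[1])
                     .decode("utf-8").partition(":"))
```

Which is exactly why the SSL requirement above isn’t optional in any environment that cares about its passwords.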

 

If you’re using Basic Delegation, I’d also strongly suggest using SSL bridging to encrypt the back-end as well, to protect the credential transfer between ISA and the internal Web Server. IPSec could also be used for this.

RSS: How Many Applications?

My token occasional “not actually related to my job” post, just typing to get my ideas down. I’m not employed as a programmer or PM, so expect questions, not policy statements!

 

I’ve put together a couple of RSS aggregator test apps in the past year or so, mainly to muck around with XML programming with .Net.
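The consuming half of such a test app boils down to very little. Here’s a minimal sketch (Python here rather than .Net, with a made-up feed) of pulling items out of an RSS 2.0 document:

```python
import xml.etree.ElementTree as ET

# A tiny, entirely fictional RSS 2.0 feed for demonstration.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def feed_items(xml_text):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

items = feed_items(SAMPLE_FEED)
```

The hard parts of an aggregator aren’t the parsing; they’re the storage and presentation questions discussed below.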

 

I must admit to being a bit confused about RSS/RDF. From a publisher’s perspective, it’s easy: just create the content and let The Masses suck it down. Whatever your content, The Masses can consume it, as long as they have an aggregator of the correct type that does whatever the consumer is focused on.

 

Most recently, I was toying with an RSS to NNTP server gateway application (I hate having to re-solve the Storage Problem, and like leveraging Outlook or other apps to do it for me) so that Outlook Express or Agent could consume it in their native tongue.

 

But where does RSS belong? Should OE consume feeds, or should Outlook? Does it really fit within one application? I’m not convinced I want all my feeds in Outlook. Should there be a completely separate feed engine? Should RSS (or Atom) be used for all applications, or should we be considering certain alternative derivative formats for types of content that don’t necessarily match the Blog space? Does every app need to understand its own brand of RSS (in which case, why not just any old XML)? Does it make the user’s life harder having a single format with multiple possible purposes and consumers?

 

Is this just an artificial distinction?

 

RssBandit and other standalone aggregators seem pretty good for surfing news, where you’re mainly interested in the headline and are happy with link clicking to pull the content, but generally I enjoy reading Blog content in Outlook for some reason (perhaps it’s more like a personal email to me that way?).

 

Subscribe to FlexWiki output, however, and you see a different class of feed – the really-not-for-the-user type of feed. Sure, it tells me that some Wiki was changed by some user – do I want an email? Would I rather have a pop-up notification and then ignore it (with optional persistent storage?). Likewise, if Sharepoint Alerts did RSS, I wouldn’t necessarily want them in Outlook; I’d want a similar point-in-time notification. As a human, I despise RSS messages that are light on meaningful content; tell my “what’s changed” application what’s new and different.

 

This might “just go away” when and if aggregators get the WinFS mojo.

 

Ideally, (as an aggregator) I’d like to be able to move content directly into a WinFS version of Outlook (note: hypothetical, unannounced product here), just by creating Message items in the file system (and putting them in an Email folder hierarchy, or something). So rather than using the current COM extensibility API for Outlook to store stuff indirectly through Outlook, and other APIs for other applications, the messages could be created and managed directly in the file system, and tagged as Message items associated with Outlook, or as short-lifespan Notification items associated with my cute sidebar tile, or, well, whatever. The aggregator could be standalone, but effortlessly (well, with minimal effort) integrate with any given Messaging application I desired that used WinFS. I could have my cake and eat it too. Other WinFS-based applications could receive similar treatment.

 

If I ever get the WinHEC Longhorn build onto one of my real machines, I’ll see what I can throw together. I like the idea; I don’t know if it’ll work in practice (I wonder if Longhorn Outlook Express uses WinFS in a way I can use?).

ISA 2000: Block Barry’s Access Except For One Site

Q: I need to block Internet access for Barry, except for one site.


A: As long as all users are required to authenticate when surfing, this is doable. You can specify exclusions using the Site and Content rules.

 

However, if any combination of (S&C and Protocol) rules is allowing anonymous access (anywhere), Barry may be able to get through; web browsers typically try to use anonymous connections before authenticating.

 

You Will Need:

 

A Destination Set (“Barry’s White List”): contains only
www.thealloweddomain.dom (and any other domains you do want Barry to access).

 

Protocol Rule(s) allowing access to HTTP/S.

 

Site and Content Rules something like this:

 

Allow (Domain Users) Anywhere Anytime
Deny (Barry) (All Sites Except Selected Destination Set: Barry’s White List)

 

or, if you’ve already got a “full privilege” user group segregated:

 

Allow (Internet Access Group) Anywhere Anytime
Allow (Barry) (Selected Destination Set: Barry’s White List) Anytime

 

More on Sasser, IPSec Firewalls, and SMB

I’ve had a couple of internal and external questions on the last post; rather than keep on flogging the earlier article, here’s some more background information on how this all works. I’ve been known to be wrong before, so please yell if you spot any mistakes or overgeneralizations.

 

Don’t Be Scared Of IPSec!

It’s not as complex as it first appears. Really. Use the policy file to have a look at it – using it for blocking is just like configuring any other firewall, only with more layers, tabs and dialogs!

 

Sasser

Sasser (A through D) attacks hosts by connecting to TCP port 445. There are other ports used for payload transfer, but 445 is the one that causes the reboot, and the one it’s actually trying to exploit at the moment. All variants currently run an FTPD on 5554.

 

File Sharing Basics

Windows uses the SMB (Server Message Block) protocol for file sharing – when you connect a network drive, chances are you’re using SMB to do it. Files that comprise the bulk of Group Policy settings are pulled to the client using SMB; logon scripts are run using SMB. RPC can also be used over SMB.

 


Windows 9X and Windows NT support only NetBIOS for file sharing, and this is performed using the NetBIOS suite, the business end of which is TCP port 139. I’ll call these “Legacy” clients and servers for this discussion.

 

Windows 2000 and beyond (so Windows XP and Windows Server 2003 at this point) support the use of what’s called “Direct Hosted SMB over TCP/IP”, which is the mechanism that uses TCP port 445. It essentially avoids a NetBIOS layer when doing file sharing.

 

To support both methods, Windows 2000+ clients try to connect to file shares on both port 139 and port 445 simultaneously, and start working with whichever connection responds first, sending a TCP reset to the slower port (if it responds at all).
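The racing behaviour can be sketched in code. This is a rough, hypothetical analogue in Python (the helper name and structure are mine, not anything Windows actually exposes): race non-blocking connects, keep the first one that completes, and abortively reset the rest:

```python
import selectors
import socket
import struct
import time

def connect_first(addrs, timeout=5.0):
    """Race non-blocking connects to several (host, port) pairs, keep the
    first that completes, and reset-close the rest. A rough analogue of
    how Windows 2000+ races ports 139 and 445 when opening an SMB session."""
    sel = selectors.DefaultSelector()
    socks = []
    for addr in addrs:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setblocking(False)
        s.connect_ex(addr)                      # returns immediately
        sel.register(s, selectors.EVENT_WRITE, addr)
        socks.append(s)
    winner = None
    deadline = time.monotonic() + timeout
    while winner is None and sel.get_map() and time.monotonic() < deadline:
        for key, _ in sel.select(timeout=0.1):
            sel.unregister(key.fileobj)
            # writable with no pending error means the connect succeeded
            if key.fileobj.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
                winner = key.fileobj
                break
    for s in socks:
        if s is not winner:
            # SO_LINGER with a zero timeout makes close() send a TCP reset,
            # much like the Windows client does to the slower port
            s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                         struct.pack("ii", 1, 0))
            s.close()
    return winner
```

The practical consequence is in the next paragraph: kill NetBIOS over TCP/IP and there’s only one horse left in the race.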

 

If you disable NetBIOS over TCP/IP, then you’re left with the direct hosted SMB stuff, so this effectively rules out using 139.

 

Typically, non-Legacy servers with File and Printer Sharing enabled operate with ports 445 and 139 incoming enabled, for compatibility with Legacy clients.

 

A Word On the NetBT Service

The NetBT service is NOT just NetBIOS over TCP/IP; it does more than that, so don’t disable the service in an attempt to turn NetBIOS over TCP/IP off, or you’ll break other things as well. If you want NetBIOS over TCP/IP off, use the radio button in Advanced TCP/IP properties, on a per-adapter basis.

 

IPSec as a Firewall

Using the IPSec driver to drop packets makes a very cheap (it’s in every copy of Windows 2000 Pro and above) and straightforward port-based firewall out of every desktop. In situations like Sasser, this means that blocking exploit ports becomes relatively straightforward (and once you’ve done it once, you’ll want to do it again).

 

You can block any protocol, port, single IP, range or subnet, incoming or outgoing. It’s not application-aware (so it’s not as flexible as the ICF or upcoming Windows Firewall – that is, it’s not a stateful firewall), but to block exploits and the odd worm, it’s as effective as it needs to be, in most instances.

 

That IPSec Policy

The IPSec Policy I put together (click that link to download it) prevents incoming connections on 445 (plus the payload ports), and I’ve recommended it only be used for clients. In most cases it can be used on servers too, but be aware of the risk if you’ve disabled NetBIOS over TCP/IP on the server.

 

If you block 445 incoming on the clients, there’s low to no risk; the clients will simply drop a connection attempt made to port 445 on themselves. You may still be able to do file sharing if NetBIOS is enabled on the clients; if not, RPC may still work for remote management anyway (port 135, then a referral to an ephemeral port).

 

The key point is that by not blocking outbound 445, the client can still connect to File Shares using port 445. If NetBIOS over TCP/IP is enabled on both the client and the server, it has two ways of doing this; if not, port 445 is the only way, which is exactly why blocking it outbound is risky. Sure, the client can also try to infect other clients, but if every client is blocking port 445 inbound, that’s a lot less to worry about while you get SUS in place to patch the little blighters.

 

The risk of blocking outbound 445 is that if NetBIOS (port 139) is disabled on the client or its target server (say, a DC), then the client will not be able to pull down logon scripts, Group Policies, and other important bits and pieces – and if you can’t download Group Policy, you can’t use Group Policy to turn it off. And that’s why I’m hesitant to recommend blocking outbound connection attempts on the client, and inbound 445 on servers… it’s eliminating a possible leg to stand on (but can be applied safely if tested, just don’t go applying it to the whole domain).

 

Which Port?

Here’s a quick summary of what I understand the ports to be used for by Sasser, and the effect of blocking each:

 

Block 445 Incoming: computer will not be rebooted by exploit attempts (because they’re dropped) (Sasser A to D)

Block 445 Outgoing: exploited computer will not be able to connect to other computers to exploit them *but* may also break Group Policy and connection to file shares (see discussion above).

 

Block 5554 Incoming: exploited computer cannot have files copied from it.

Block 5554 Outgoing: same effect.

 

Block 9996 (A-C) and 9995 (D) Incoming/Outgoing: exploited computer won’t initiate the transfer (not really sure if this is the case).

 

Using IPSec Policies as a Firewall to Block SASSER Infection

Short version: Use an IPSec policy to configure a miniature firewall on each client (Windows 2000 and above) to stop SASSER reboots and buy time to deploy the patch.

 

Long version: The Sasser worm hits hosts on port 445 to infect them and crashes LSASS, which makes the box restart – which can be annoying if you’re trying to patch it at the time, to say the least, and can break it at worst.

 

If you have an AD infrastructure, you can quickly and easily use it to deploy an IPSec policy to the client machines that blocks port 445, as well as 5554 (the FTP server created by the worm) and 9996 (the remote command shell). Blocking any given port will stop the worm from being able to propagate, but if 445 is unblocked, you’ll still get rebooting clients until the MS04-011 patch is applied as the clients repeatedly whack each others’ LSasses. So, for workstations, blocking 445 seems like a reasonable path to take; you can still connect to file shares and RPC over the other SMB mechanism on 139, if NetBIOS over IP isn’t disabled.

 

I rolled an IPSec policy for client application only earlier (click that link to download it). Please don’t apply it to File Servers or Domain Controllers; just get them patched first. I’d rather not risk blocking ports those machines depend on; they need to be as available as you can keep them. So, worst case, patch them manually.

 

Why A Policy?

To buy you non-reboot, non-propagation time to apply the patch MS04-011. Once the patch is on, use a removal tool to eradicate the infection, then you’re done.

 

To Assign The Policy

To assign the IPSec policy, find the OU that has your workstations in it, then edit the computer Group Policy Object for that OU. If you don’t have an OU with your workstations in it (eg, not Servers, just client computers), now might be a good time to make one.


In a Group Policy object that applies to the OU, you want Computer Configuration -> Windows Settings -> Security Settings -> IP Security Policies on Active Directory.

 

You can only have a single IPSec policy assigned at a time, but the policy can comprise a whole bucketload of filter actions. If you import the policy above, it adds the filter actions to the pool, so you can add them to an existing IPSec policy if you like (if you’ve an existing assigned IPSec policy, I’ll assume you know what you’re doing when adding filters to it). Domain policies always override local policies if they’re defined.

 

If you’re looking at the default set of three policies when you start, you’ll notice a SASSER policy added there now. If you’ve not used IPSec before, this is the one you want to Assign. If you look at the properties, you’ll see two filter actions, which are block actions – I created one for the two payload ports, and one for inbound SMB 445. The payload ports are inbound and outbound blocking, but the SMB is only inbound, so you can still connect to file shares and (unfortunately) attack other hosts (if the other hosts are using the same policy, no harm done).

 

Once you’ve got the policy configured the way you want it, just Assign it, and all the computers in the OU will pick it up at their next policy refresh interval (up to 90 minutes by default).

 

Caveats:


  • If In Doubt, Don’t Apply This Policy.
  • Patch Your Servers First. Don’t Apply This Policy To Them.
  • This blocks the current infection and transfer method of the worm, but is not a substitute for patching. You Need To Patch. This may stop the constant rebooting though…
  • If you block port 139 as well, you will NOT be able to remotely manage the machine using RPC, nor will you be able to copy files to/from it. Terminal Server or Remote Desktop should still work, but you do NOT want to block 139 and 445 on DCs or File Servers. If in doubt, don’t apply it.
  • If you have a problem, unassign the policy and reboot the client.
  • Usefulness depends on having the IPSec Policy Agent enabled on PCs the policy is applied to. Group policy can be used to tweak this as well, in the Security settings for Services.

The policy blocks inbound 445, and in/out 5554/tcp and 9996/tcp, so it prevents infection, payload transfer and remote control, in that order. It does not prevent outbound connections on 445 – this means that clients can still retrieve Group Policy and connect to file shares, but also means that an infected client can try to hit other infected clients. This isn’t as much of a problem if they’re all either patched or using this policy.
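In effect, the policy is a tiny stateless rule table. Here’s a rough Python model of it (the Rule type and verdict function are illustrative inventions; the ports and directions are the ones listed in this post):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rule:
    proto: str            # "tcp", "udp", or "*" for any
    port: Optional[int]   # destination port; None means any
    direction: str        # "in", "out", or "both"
    action: str           # "block" or "permit"

# Rough model of the policy described above:
SASSER_RULES = [
    Rule("tcp", 445,  "in",   "block"),   # drop incoming exploit attempts
    Rule("tcp", 5554, "both", "block"),   # worm's FTP payload port
    Rule("tcp", 9996, "both", "block"),   # worm's remote command shell
]

def verdict(rules, proto, port, direction, default="permit"):
    """First matching rule wins; purely stateless, like the IPSec driver."""
    for r in rules:
        if (r.proto in ("*", proto)
                and r.port in (None, port)
                and r.direction in ("both", direction)):
            return r.action
    return default
```

Note that outbound 445 falls through to the default permit, which is exactly the Group-Policy-preserving behaviour described above.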

 

Removal

To remove the policy, just Un-assign it in the Group Policy Object (or Local Security Policy if you’re using it on standalone machines) and either reboot the clients, or wait for a policy refresh, or run


  • GPUPDATE /FORCE  for Windows XP+ machines, or
  • SECEDIT /REFRESHPOLICY MACHINE_POLICY /ENFORCE  for Windows 2000.

[Update] How This Compares To Other Solutions

This policy provides port blocking at every client that the policy is applied to. So, there are no infrastructural changes required to support it: if you have AD, you can roll out an IPSec policy, and turn the same policy off using AD when you’re patched up.

Blocking the port at the router/switches is also a possibility, though blocking at the router means that an infected computer’s local subnet is still able to keep itself busy. Keep in mind, too, that you’d then need to identify the servers you don’t want blocked by IP address, rather than using AD Users and Computers or similar directory-based management tools, where you probably have the servers grouped in a nice convenient OU to begin with.

Blocking at the edge is hazardous; if an infected client gets plugged into the network somehow (or VPNs in, etc, etc), then unless you’re using IPSec or 802.1X to authenticate connections, or are patched, your network’s edge protection will not help.

 

FAQ

IPsec seems complex. Don’t I need a Public Key Infrastructure/shared key/whatever to use it?

NO!

This solution is actually not using IPSec, just the IPSec driver to drop unwanted packets. We’re not implementing IPSec itself; we’re just configuring what could be thought of as a miniature firewall on every client. (As an aside, if XP SP2 were out and widely deployed, we’d have another option for doing this using a different set of policies.)

 

Why not apply it to servers?

My cautious nature. Generally, SMB file sharing and RPC-dependent mechanisms can use SMB over 445, or NetBIOS-based SMB. If you’ve disabled NetBIOS over IP at the client OR the target server, I think that means that using this policy will stop SMB connectivity, which is required for Group Policy among other things. By all means experiment with the policy on local machines where you can turn it off, but deploying to servers via AD is probably The Wrong Choice.

 

Why does blocking 445 stop the reboots?

Because the LSASS vulnerability being exploited causes a reboot. Local infection is not known to cause a reboot, but being hit on port 445 by SASSER will.

 

And a follow-up post explaining a bit more about SMB, Sasser and IPSec Firewalling

 

 

This is why I like the idea of the firewall being on by default in Windows XP SP2.

 

Hope it helps someone.