(ooh!) Foldershare Revamped!

(This post brought to you by the number 3, and the letters WHY AM I NOT SLEEPING?) 

It’s been a while – and suddenly: Pow! (and did I mention ooh?) A new Foldershare website! Has the feel of SkyDrive to it. And a big, prominent beta logo. Wonder if that hints that Foldershare might become the desktop client for Skydrive, at least in part? (previously, as I understand it, it was always client-to-client, no actual storage “in the cloud”, so you couldn’t get stuff unless at least one replica was switched on and logged in, but it’s possibly a short hop from there to SkyDrive being seen as an always-on repository…) (Juuust idle speculation. I’ve heard, seen, and know nothing. (Just ask anyone that works with me.))

Ooh again! A new FolderShare Satellite too (with Activity right on the main popup menu, yay! That initial sync is as addictive as watching an old-skool DOS defrag).

That’s about it. I see a few problem reports from the new beta in the comments on the Foldershare Blog, so if all is currently right with your file synchronization world, you might want to keep the old client installer handy before upgrading.

Post-SP2 TCP Offload Fix

I’ve mentioned Chimney before. Now, a new Windows Update fix for TCP Offload, which turns it off.

It was on by default in Windows Server 2003 SP2, so if your NIC supported Offload, or RSS, or that other thing I can never remember, it was enabled.

But: we (PSS we) typically turn it off as a first troubleshooting step for any network-related issue –

a) because we know from experience that several drivers seem to do interesting things with it enabled (that’s a nice way of saying update your drivers),

b) because several of our drivers do interesting things with it (if you’re going to choose to use it, check for recent-model tcpip.sys hotfixes), and

c) because we want to be able to see TCP traffic in a network capture for troubleshooting purposes.


Off-unless-opted-in brings parity with Windows Server 2008.
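To save you the search: the usual “switch it all off” levers look something like this (from memory, so a sketch rather than gospel – double-check the value names against the KB before deploying anything):

```bat
:: On Windows Server 2003 SP2, Chimney offload can be disabled in one line:
netsh int ip set chimney disabled

:: Or flip the registry switches directly - Chimney, RSS, and NetDMA
:: ("that other thing", aka TCPA) all live under Tcpip\Parameters.
:: Registry changes here generally want a reboot:
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableRSS /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPA /t REG_DWORD /d 0 /f
```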

IIS7 Modules Aplenty – WebDAV, Bitrate Throttling

New modules, supported by Microsoft, are now officially RTMd (RTWd?) and available for use with IIS 7.0.


Yay new WebDAV! Yay being able to enable it on specific parts of a site! Yay better!

Robert: http://blogs.msdn.com/robert_mcmurray/archive/2008/03/12/webdav-extension-for-windows-server-2008-rtm-is-released.aspx


•    Microsoft WebDAV Extension for IIS 7.0 (x86)    http://www.iis.net/go/1621/
•    Microsoft WebDAV Extension for IIS 7.0 (x64)    http://www.iis.net/go/1618/

Media Bitrate Throttling

Yay something about bandwidth for media files!

Vishal: http://blogs.iis.net/vsood/archive/2008/03/15/bit-rate-throttling-is-now-released.aspx


•    32 bit – http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1640

•    64 bit – http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1641


The Internet Information Services 7.0 (IIS 7.0) Media Pack – Bit Rate Throttling module provides the ability to throttle progressive downloads of media files (in which audio/video playback starts as soon as sufficient data has been buffered on the client) based on the content bit rate. For sites that deliver audio and video files that may not be watched in their entirety, this module could significantly reduce your media-related bandwidth costs. A secondary feature of the Bit Rate Throttling Module is that it can also be used to throttle non-media (“Data”) file types at specified bit rates.

Don’t Forget The New FTP Server While You’re At It

I already mentioned this, but I’ll list it here as a one-stop convenience (aww, aren’t I nice?)

Replaces FTP6 (that shipped in the box) with FTP7: FTP with SSL, virtual hostname support, extensibility, right-click-and-add-FTP-to-a-website publishing integration… loads of cool stuff.

  • Microsoft FTP Publishing Service for IIS 7.0 (x86)

  • Microsoft FTP Publishing Service for IIS 7.0 (x64)

    Productivity Tip: Make A Y-Wing From Whiteboard Markers

    My Top Tips series* commences with the only not-really-computery tip I have to share: The Y-Wing.

    Are you sick of losing whiteboard markers?

    I was! We all were!

    In the turbulent, fast-paced environment that every-minute-counts problem solving creates, my workgroup found that tens of minutes a year were being spent searching for whiteboard markers.

    Every moderately complex case needs a whiteboard diagram (you can quote me on that), but what happens when you can’t find a marker? Worse still, when the only marker you can find turns out to be permanent!? Gasp!

    Our solution to this problem, hit upon while playing with marker lids that happened to interlock particularly well, was the Y-wing.


    While the design of this particular type of marker lid lends itself perfectly to the Y-wing layout, others may also work when taped.

    And if not, you can always cluster ’em in a circular-ish fashion, tape ’em up, and call them a Corellian cruiser (or something – what was the thingo at the start of the first movie with all the engines at the back? Are they seriously called "Blockade Runners"? Seems awfully specific for a ship that could probably get to the local supermarket and back with the groceries too. Wait, I’ve digressed again, haven’t I?)


    Aside: Pedants among you might note that technically, it’s more of a pixellated W (or perhaps an M if you’re dyslexic, or from the Northern Hemisphere), but there was no W-wing in Star Wars. Or M-wing. So it doesn’t sound as cool. And it’s close enough. Just back off, pal.

    How does it help? Well: by increasing the size of the marker object overall, and clustering all the whiteboard markers in the same spot, it’s both more discoverable/findable, and encourages returning the markers to their clustered form.

    It’s also way cool and can be used in space battles* against inferior whiteboard markers and other trinkets (such as the 5 Year Award Star Destroyer) between drawing episodes.

    Now, it’s easy to spot the hulking form of the Y-Wing amongst the desk clutter, and we’ve rounded out the fleet with other clustered whiteboard goods.

    Now if only I could stop the bastard that keeps stealing all the whiteboard dusters. (Andy, I’m talking to you.)

    “Stacking” NTLM Authentication

    This question came up today (well, actually, it was about four weeks ago I started typing this, but bear with me), and it’s been a little while since I’ve rambled about authentication protocols, so let’s enjoy a nice, calm discussion on a Monday Tuesday arvo.

    The request was something like:
    In a Web Publishing scenario, can I do NTLM at the ISA Server and NTLM at the Exchange server too?


    And the answer is – well, no.

    There’s no way for the client browser to distinguish between the ISA Server (first) saying 401 WWW-Authenticate: NTLM, and then the IIS Server saying 401 WWW-Authenticate: NTLM.

    Because it appears to be a repeated authentication sequence when the connection is already authenticated from IE’s perspective (and IE doesn’t think it’s talking to a different server), IE assumes there’s been an auth failure (why else would the server challenge again?).

    So, lots of authentication prompts are going to happen. The solution (as described) is not workable.
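    Concretely, the double-challenge looks something like this on the wire (a rough sketch – the path and server labels are made up for illustration):

```text
C:   GET /exchange/ HTTP/1.1
ISA: HTTP/1.1 401   WWW-Authenticate: NTLM
C:   GET /exchange/   Authorization: NTLM <Type 1 - negotiate>
ISA: HTTP/1.1 401   WWW-Authenticate: NTLM <Type 2 - challenge>
C:   GET /exchange/   Authorization: NTLM <Type 3 - response>
     ... ISA is happy; the request is forwarded on ...
IIS: HTTP/1.1 401   WWW-Authenticate: NTLM
     ^ a fresh challenge on a connection IE believes is already
       authenticated, so IE treats it as an auth failure and prompts
```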


    With ISA 2006 and its amazingly-useful-how-did-we-ever-live-without-them Authentication features:

    What you could do is Integrated Windows Authentication at the Exchange server (i.e. allow Kerberos), and use protocol transition at the ISA Server, from whatever form of authentication you can accept from a client to Kerberos Credential Delegation (or even another protocol, depending on the auth method used by the listener).


    The answer to the question itself was a "no", but the question almost always isn’t actually the question. That one’s for free.


    Special note: I worked really hard on the headings for this post. I hope it was appreciated.

    MaxUserPort – what it is, what it does, when it’s important

    What can we say about MaxUserPort that hasn’t already been said? Not a lot, it would seem. He’s a beautiful dancer, perhaps? Ahh, such gentle humour, and nary a kitten drowned anywhere.

    But TCP port shenanigans are fairly frequently misunderstood, so let’s talk about the very basics of MaxUserPort.

    NB: This is all pre-Vista behaviour – applicable from NT4 through to Windows Server 2003, including all the little NT-flavoured stops on the way.

    NB 2 [2016-11-04]: But! The same principles apply to Windows Vista through to Windows 10 / Server 2016, and the MaxUserPort value seems to be supported, presumably for “legacy” purposes (eg, an app installer sets it, and it’s honoured), sooo… it should still work similarly. I think. YMMV. Always test. Hugs.

    MaxUserPort controls “outbound” TCP connections

    MaxUserPort is used to limit the number of dynamic ports available to TCP/IP applications. (I don’t know why, I just know it is. Probably something to do with constraining resource use on 16MB machines, or something.)

    It’s never going to be an issue affecting inbound connections.

    MaxUserPort is not the right answer if you think you have an inbound connection problem.

    To further simplify: MaxUserPort is typically going to limit the number of outbound sockets/connections that can be created.

    Note: that’s really a big fat generalization, but it’s one that works in 99% of cases.

    If an application asks for the next available socket (a socket is a combination of an IP address and a port number), it’ll come from the ephemeral port range allowed by MaxUserPort. Typically, these “next available” sockets are used for outbound connections.

    The default unmodified range for MaxUserPort pre-Vista was from 1024-5000 (so ~4000 ports), but the possible range – when modified – is up to 65534.

    (Vista+ default ephemeral port range is 49152-65535, so roughly 4x the ports. See this.)


    Value Type: DWORD
    Valid Range: 5000-65534 (decimal)
    Default: 0x1388 (5000 decimal – when not set – see the notes for MS08-037 for an update on pre-Vista behaviour)
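    In .reg form, maxing it out looks something like this (a sketch; the key is the standard Tcpip\Parameters one, and 0xfffe is 65534 decimal):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Raise the ephemeral port ceiling to the maximum (0xfffe = 65534 decimal)
"MaxUserPort"=dword:0000fffe
```

    (As with most Tcpip\Parameters values, expect a reboot before it takes effect.)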

    When You Fiddle MaxUserPort

    So, why would you change MaxUserPort?

    In the web server context (equally applicable to other application servers or even client programs), you’d usually need to look at MaxUserPort when:

    – your server process is communicating with some other type of system (as a client) – like a back-end database, or any TCP-based application server (quite often HTTP web servers)


    – you are not using socket pooling, and/or

    – your request model is something like one request = one outbound TCP connection (or more!)

    In this type of scenario, you can run out of ephemeral ports (between 1024 and MaxUserPort) very quickly, and the problem will scale with the load applied to the system, particularly if a socket is acquired and abandoned with every request.

    When a socket is abandoned, it’ll (by default) take two minutes to fall back into the pool.

    Discussions about how the application/website design could scale better if it reused sockets rather than simply throwing a new connection at each request tend to be unwelcome when the users are screaming that the app is slow, or hung, or whatever. So at this point, you’d probably have established that new request threads are hung waiting on an available socket, and you just turn up MaxUserPort to 65534 – and then hope your app doesn’t hit that as the next scale limiter.


    What Next? TcpTimedWaitDelay, natch

    Once MaxUserPort is at 65534, it’s still possible for the rate of port use to exceed the rate at which they’re being returned to the pool! You’ve bought yourself some headroom, though.

    So how do you return connections to the pool faster?

    Glad you asked! – you start tweaking TcpTimedWaitDelay.

    By default, a connection can’t be reused for 2 times the Maximum Segment Lifetime (MSL), which works out to 4 minutes, or so the docs claim, but according to The Lore O’ The Group here, we reckon it’s actually just the TcpTimedWaitDelay value, no doubling of anything.

    TcpTimedWaitDelay lets you set a value for the Time_Wait timeout manually.
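    In .reg form, trimming it looks something like this (a sketch – same Tcpip\Parameters key as MaxUserPort; 0x1e is 30 decimal, and the documented valid range is 30-300 seconds):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; TIME_WAIT in seconds: 0x1e = 30 decimal (documented range 30-300)
"TcpTimedWaitDelay"=dword:0000001e
```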

    As a quick aside: the value you specify has to take retransmissions into account – a client could still be transferring data from a server when a FIN is sent by the server, and the client then gets TcpTimedWaitDelay seconds to get all the bits it wants. This could be sucky in, for example, a flaky dial-up networking scenario, or, say, New Zealand, if the client needs to retransmit a whole lot… and it’s sloooow. (And this is a global option, as far as I remember.)

    30 seconds is a nice, round number that either quarters or eighths (depending on who you ask – we say quarter for now) the time before a socket is reusable (without the programmer doing anything special (say, SO_REUSEADDR)).
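    For the curious, here’s roughly what “the programmer doing something special” looks like – a minimal Python sketch of SO_REUSEADDR, which lets a new socket bind an address that’s still parked in TIME_WAIT instead of waiting the timeout out:

```python
import socket

# Sketch: opt a socket out of the TIME_WAIT re-bind restriction.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 0))  # port 0: let the stack pick an ephemeral port

# Confirm the option stuck (getsockopt returns nonzero once set)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) != 0)  # → True
s.close()
```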

    If you’ve had to do this, at this point, you should be thinking seriously about the architecture – will this scale to whatever load requirements you have?

    The maths is straightforward:

    If each connection is reusable after a minimum of N (TcpTimedWaitDelay) seconds,
    and you are creating more than X (MaxUserPort) connections in an N second period…

    Your app is going to be spending time “waiting” on socket availability…

    (Which is what techy types call “blocking” or “hanging”. Nice*!)
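    The maths above can be sketched in a few lines (the function name and defaults are mine, not anything official – using the docs’ claimed 4-minute default):

```python
# Back-of-envelope ephemeral-port arithmetic, pre-Vista defaults assumed:
# ports above the well-known range recycle every TcpTimedWaitDelay seconds,
# so the sustainable outbound connection rate is ports / delay.
def sustainable_connections_per_sec(max_user_port=5000, tcp_timed_wait_delay=240):
    ephemeral_ports = max_user_port - 1024
    return ephemeral_ports / tcp_timed_wait_delay

# Defaults: ~3976 ports / 240 s - not many new connections per second
print(round(sustainable_connections_per_sec(), 1))           # → 16.6
# Cranked: MaxUserPort=65534 and TcpTimedWaitDelay=30
print(round(sustainable_connections_per_sec(65534, 30), 1))  # → 2150.3
```

    Exceed that rate for long enough and threads start queuing on socket availability – which is the “waiting” above.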

    Fun* KB Articles:
    Event ID 4227


    I read items like this with interest.

    After recently flattening, installing, rebuilding or migrating a bunch of machines, I started to develop my “minimum customized set” that makes my life on a Windows box more comfortable.

    Jeff blogged a while back about how extensive customization was best avoided. I agree: I used to be a customization freak back when I had a single computer: I molded it to my personality, drew custom icons, special desktop backgrounds, ran WindowBlinds, the works. Now, I have three and a half home computers, two and three quarters work computers, and a reasonably high rebuild rate while testing beta-this and alpha-careful-it’ll-hose-your-machine that.

    Customization is a lossy art; little flecks detach and are lost as you move from computer to computer, profile to profile. I never have quite the same profile twice.

    But I started making a concerted effort to have a usable baseline this time around. Here’s a quick list of what my “do this on new computers” folder looks like now:

    Directory of C:\Users\tristank\Documents\!Sync\!Tweak

    05/03/2008  04:08 PM    <DIR>          .
    05/03/2008  04:08 PM    <DIR>          ..
    29/02/2008  12:24 PM               115 DownloadThese.cmd
    29/02/2008  12:12 PM               526 IEMaxConnectionsRegValues.cmd
    29/02/2008  12:05 PM                63 NotepadInSendTo.cmd
    29/02/2008  01:25 PM               237 OneNoteIconDefaults.cmd
    05/03/2008  03:52 PM                91 OutlookEmailTemplate.cmd
    05/03/2008  04:13 PM               147 Puretext.cmd
    05/03/2008  04:05 PM    <DIR>          resources
    05/03/2008  04:06 PM                36 ResourcesFolder.cmd
    04/03/2008  04:05 PM                46 WireShark.cmd
                   9 File(s)          1,261 bytes

    Basically, the minimum “oh, I need to go back and set that” set of customizations that I need to apply to a Windows machine I’ll be using for an extended period. Not exactly sexy, but it supports starting with any profile and modifying it, so the footprint’s tiny. Update: All non-EXE/BMP/WAV files are in a Skydrive folder here.

    I would (of course!) argue that all my customizations should be set by default, but then my needs ain’t the needs of the many.

    On a new computer, I just have to go get the Foldershare satellite, sync my main sync folder, then double-click for each customization.

    File stuff like my Outlook email template (with my custom “Debug Spew” monospaced uncoloured formatting style) and sound effects for server-side email rules are stored in the Resources folder (along with the Notepad.lnk file to be copied into %appdata%\Microsoft\Windows\SendTo).
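    For flavour, a purely hypothetical sketch of what a tiny one like NotepadInSendTo.cmd might boil down to (the real 63-byte file isn’t reproduced in the post, so this is a guess at the shape):

```bat
:: Hypothetical sketch only - copy the Notepad shortcut from the synced
:: resources folder into this profile's SendTo folder:
copy /y "%~dp0resources\Notepad.lnk" "%APPDATA%\Microsoft\Windows\SendTo\"
```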

    It’d be so much cooler (and unquestionably easier) to just shove a USB key into the monitor and have everything ready for me as I logged on, so I live in hope. The article above keeps that hope alive!

    A profile-on-a-stick is looking increasingly viable – someone mentioned a 16GB thumbdrive the other day, and if they double every 18 months, we’re pretty close to being able to store all my actually-needing-portability “Documents and Settings” for a while to come.

    Using a profile-in-the-cloud would solve many of my issues, but might cost a lot in bandwidth terms (and living here in Australia, bandwidth is still very expensive when talking about tens of gigabytes)…