How SLAM retrieves the computer’s Local Admin password

Simple: SLAM doesn’t retrieve the computer’s Local Admin password – LAPS does!

SLAM is a Premier Operations Program offering (POP) for Securing Lateral Account Movement. It workshops credential theft mitigation (CTM) and counters lateral traversal with logon restrictions and firewall rules (among other protections)… but one key feature is deployment of LAPS, the Local Admin Password Solution.

So SLAM includes LAPS, and searching for how SLAM does something with passwords might not yield a result. (Hopefully “Until now…”). LAPS is quite well-documented, though, so answers are likely available.

POP-SLAM has been recently complemented by OA-SLAM (OA = Onboarding Accelerator), which is a more “let’s do it all in production”-style Microsoft Services offering.

How To (quickly) Tell If You’re 5 Years Out Of Date On Security Updates

There’s a fun indicator you can use to quickly evaluate whether you’ve been missing security updates for the last five years (ish) on older Operating Systems (i.e. Win2008-2008 R2), and it’s the build number. Not infallible, but then not often wrong.

Helpful Table Of Problem Versions

If you’d rather skip my rambling – and let’s face it, you should – here’s the list of build number indicators which might mean you have an update problem.

  • 10.0.14393.0 – Windows Server 2016 ships with a broken servicing stack which can’t talk to WSUS. (15 months out of date)
  • 6.3.9600.16xxx – (18xxx is current) means Windows Server 2012 R2 or Windows 8.1 without 2919355 applied (~3 years missing updates)
  • 6.1.7600 – Windows Server 2008 R2 or Windows 7 without Service Pack 1 (5 years missing updates)
  • 6.0.6001 – Windows Server 2008 Service Pack 1 (5 years missing updates)

By comparison, good build numbers (as of Sep 2017) are:

  • 10.0.14393.1670 – Win10 or Win 2016 with Sep 2017 CU (anything later than .187 is probably OK)
  • 6.3.9600.18xxx – Win2012 R2 post-2919355
  • 6.1.7601 – Win2008 R2 SP1 / Win7 SP1
  • 6.0.6002 – Win2008 SP2

You can use the WSUS console (yes, even if you’re using SCCM, though you probably have cooler methods available) to quickly evaluate build numbers across your fleet.
In a pinch, you can use AD Users and Computers if you’re just evaluating the third number in the sequence (i.e. doesn’t work for 2919355 or for Windows 2016 boxes).
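If you want the quick fleet-wide view from PowerShell rather than either console, something like this works. (A sketch only – it assumes the RSAT ActiveDirectory module is available, and that your computer objects are reasonably current.)

```powershell
# Group the fleet by self-reported OS version.
# Stale computer objects will show up too, so sanity-check anything
# surprising before panicking.
Import-Module ActiveDirectory
Get-ADComputer -Filter * -Properties OperatingSystem, OperatingSystemVersion |
    Group-Object OperatingSystemVersion |
    Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize
```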

Back In The Day, Build Numbers Were Even More Useful

Very helpfully, the Windows Vista era introduced incremental build numbers for Operating System versions when Service Packs were applied. Windows Vista – which you’ll recall came out almost a year ahead of its server equivalent, Windows Server 2008 – shipped with build number 6000.

Windows Server 2008 shipped with “Windows Vista” Service Pack 1 inbuilt, as it were, and so Vista SP1 and Windows Server 2008 SP1 (i.e. RTM) have the same build number, 6001.

Service Pack 2 followed, again incrementing the build number for both to 6002.

For the Windows 7 era, things were a bit more straightforward. Windows 7 and Windows Server 2008 R2 shipped at about the same time, as build 7600.

When Service Pack 1 was released for both, the build number incremented to 7601.

Quite a few of our Premier Security Assessments pull OS information using WMI from targets, and I sort by the self-reported build number to quickly identify groups of hosts which might not have a Service Pack. It’s very, very infrequently wrong. You could equally do the same by whether “Service Pack X” appears in the CSDVersion, but the build number is a nice, straightforward way of identifying this if you’re collecting it widely.

(AD Computer objects track what appears to be the same information, so querying AD might be a viable option if you’re reasonably certain that the computer objects there are still “live”).

What can you do with this information?

Well, you can say with some confidence that anything which self-reports as being build 7600 – i.e. not 7601 – probably hasn’t had any Windows security updates since about 2013.

The Support Lifecycle site notes that without SP1, Windows Server 2008 R2 (7600) exited support in April 2013. That’s the point after which security updates stop applying, because they require SP1 (7601), which isn’t installed.

Likewise, if you’ve a Windows Server 2008 (6001) Server, it hit End Of Support at the same time (and Service Pack 2 (6002) is required for any updates beyond that point).

If you haven’t got the relevant Service Pack approved in WSUS (or SCCM), the computers won’t even see updates beyond this point as being applicable. So it might seem like you’ve a bunch of completely updated and compliant servers – with lots of updates showing as not applicable on closer inspection – but if they haven’t taken the Service Pack, they’re only as updated as they self-report. And they know the newer updates aren’t for them.

In this case, “newer” means “pretty much everything since mid 2013”

What should you do?

So here’s what to do: Pull a report of the OS versions reported by servers within your environment. Clients too, if you think it’s possible some don’t have Win7 SP1.

You could do something like:

  • Start, Run, WinVer on a suspect PC (if it doesn’t say Service Pack X, problem)
  • PS:   Get-ADComputer -Filter '(OperatingSystemVersion -like "*7600*") -or (OperatingSystemVersion -like "*6001*")' -Properties OperatingSystemVersion,OperatingSystemServicePack | Export-Csv NoServicePack.csv    # (a blank NoServicePack.csv = good)
  • Or    wmic /node:servername os get version     – if WMI (RPC) is enabled to the target (in which case, extra bonus security points lost unless you’re using a PAW or management host – you should be firewalling!)
  • Or use WSUS: Turn on the Version column in the All Computers view in the WSUS console, then Group By (or just Sort by) Version and look at the build numbers reported. (Don’t forget to filter by Any)

If there are 7600s or 6001s found, check a few out, and just confirm that they’re not relevant-Service-Pack-less. (Best-case outcome: they’re being misreported.) If they are, try to work out and address the root cause – for eg, the Service Pack update wasn’t approved, or the WSUS catalog doesn’t include the update, or the PC isn’t in the right SCCM update group, or… whatever it is.

As a note, if you’re in that bucket, you’re likely to have many updates to apply, which will likely take some time and disk space to chew through. (If it’s simpler to redeploy an OS with a current build than update an older one, consider that).

And

And if you’ve found some unpatched boxes as a result of reading this, a) phew, lucky we found them now, and b) really think about that root cause. Mistakes in any human-driven process are predictable: does your process allow for mistakes and have any built-in correction for them? Update management isn’t always easy, but many update policies are geared towards fragility and failure, due to excessive process being required for an update to make it to the target box. A process failure without a corrective phase might result in updates being missed for years.

In some cases, what we hear is that some set of updates are initially rejected (or “deferred”) due to issues or concerns, which is fair enough – but then the decision doesn’t get revisited for months or years afterwards – sometimes never, until the update state is compared with Windows Update. If you don’t look back and check your assumptions – really test what updates are deployed and what you’re still vulnerable to – then things can rapidly and near-invisibly deteriorate, until suddenly, one day you’re looking back at 5 years of unpatched systems.

Core question: If the participants in your existing update process/policy had “just” been pointed directly at Windows Update and set to update weekly, how many Critical and Important updates might have been applied in the interim? Would the outcomes have been better?

And And: an afterthought for 2012 R2

I haven’t got into 2919355 yet, but it’s the 2012 R2 (and Windows 8.1) equivalent of a Service Pack, and as of late 2014, it became the mandatory update on which all other 2012 R2 (and 8.1) updates depended.

If you haven’t installed it, as with the older OSes above, updates would have stopped in – let’s say 2015. So you may be a couple of years behind by now.

I don’t know if it’s as simple as a build check for that one (it might be visible through the detailed build reported by the WSUS console – I don’t have one to check right now), but it’s the other key update we find missing when evaluating update state using MBSA manually.

From a quick bit of KB spelunking, I figure there might be a way to tell from the WSUS reported client version (but it’d always be a “soft” confirmation) – check out the difference between the file information in the pre- and post-2919355 articles for the same update (while still in the grace period)

Pre (i.e. the file information for computers without 2919355): “For all supported x64-based versions of Windows 8.1 and Windows Server 2012 R2” – file versions in the 6.3.9600.16xxx range

Post (i.e. the file information for computers with 2919355 installed already): the same table – file versions in the 6.3.9600.17xxx range

So I’ll hazard an ultra-hazardous guess, which is that if you have computers self-reporting in WSUS as being 6.3.9600.16xxx, they might have stalled at pre-2919355, so they need 2919355 (or a descendant or prerequisite) approved – after which I assume the build number will be 17xxx or higher. MBSA can help you identify what Windows Update would think was missing, so you can search WSUS for approval states by KB ID.
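If you’ve already got the detailed versions exported from WSUS (or anywhere else), the check itself is just string matching – a sketch, where builds.txt is a hypothetical one-version-per-line export:

```powershell
# Flag anything stuck in the pre-2919355 band (6.3.9600.16xxx)
Get-Content .\builds.txt |
    Where-Object { $_ -match '^6\.3\.9600\.16\d+$' } |
    ForEach-Object { "Probably missing 2919355: $_" }
```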

Krebs’ Immutable Truths of Data Breaches

A rationale for more stringent risk assessment. Or indeed any risk assessment for internet connected assets, regardless of size or perceived value to others.

Krebs’s Immutable Truths About Data Breaches

“There are some fairly simple, immutable truths that each of us should keep in mind, truths that apply equally to political parties, organizations and corporations alike:

-If you connect it to the Internet, someone will try to hack it.

-If what you put on the Internet has value, someone will invest time and effort to steal it.

-Even if what is stolen does not have immediate value to the thief, he can easily find buyers for it.

-The price he secures for it will almost certainly be a tiny slice of its true worth to the victim.

-Organizations and individuals unwilling to spend a small fraction of what those assets are worth to secure them against cybercrooks can expect to eventually be relieved of said assets.”

Website Security Suggestion: Get rid of cruft! (script included)

Right: One of my pet hates is cruft on a production website.

Cruft is stuff – files – which has accumulated because nobody’s paying attention. Cruft includes sampleware. Developer experiments. Readmes. Sample configs. Backups of files which never get cleaned up. Just general accumulated stuff. It’s website navel lint. Hypertext hairballs.

Cruft. Has. No. Place. On. A. Production. Website!

Worst-case, it might actually expose security-sensitive information. (That’s the worst type of cruft!).

Want to find cruft? Well, easiest way to start is:

D:\WebContent> dir /s *.txt

That’s a good start. For every Readme.txt, add 10 points. For every web.config.txt, add 1000 points (why? That’s a potentially huge problem – .config is blocked by Request Filtering by default (with certain exceptions), but .config.txt: no problem! Download away.)

If you score more than 10 points, you need to rethink your strategy.

  • There is no reason for files like readme.txt to exist within your production website
    • Okay, there’s one reason and that’s when you’re providing one you know about, and have vetted, for download.
      • I mean, obviously if the site is there to provide readme.txt s for apps people are downloading, great! But if it’s the readme for some developer library which has been included wholesale, bad pussycat.
  • There is no reason for files like web.config.bak to exist within your production website.
    • Luckily, .bak files aren’t servable with the default StaticFileHandler behaviour. But that doesn’t mean an app (or * scriptmap…) can’t be convinced to hand you one…
  • If you have web.config.bak.txt files, you’re asking for trouble.
    • Change your operational process. Don’t risk leaking usernames and passwords this way.
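If you want something slightly smarter than dir /s, a rough sweep of the usual suspects might look like this (sketch only – D:\WebContent is a placeholder for your own content root):

```powershell
# Rough cruft sweep: flag common leftover file patterns under the content root.
# *.config.txt is already covered by *.txt; add patterns to taste.
Get-ChildItem -Path 'D:\WebContent' -Recurse -File -Include '*.txt','*.bak','*.old' |
    Select-Object FullName, Length, LastWriteTime |
    Export-Csv .\cruft-candidates.csv -NoTypeInformation
```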

The Core Rationale

Web developers and site designers should be able to explain the presence of every single file on your website.

I don’t care if it’s IIS or Apache or nginx or SuperCoolNewTechnologyX… the developers should be responsible for every single file deployed to production.

And before the admins (Hi!) get smug and self-satisfied (you still can, you just need to check you’re not doing the next thing…), just check that when you deploy new versions of Site X, you’re not backing up the last version of Site X to a servable content area within the new version of Site X.

For example, your content is in F:\Websites\CoolNewSite\ with the website pointed to that location…

  • It’s safe to back up to F:\Backups\CoolNewSite\2016-11-13 because it’s outside the servable website
  • It’s not cool to back up to F:\Websites\CoolNewSite\2016-11-13 because that’s part of the website.

How Do I Know If I’m Crufty?

As I do, I started typing this rant a while ago, and then thought: You know what? I should script that!

I had a bunch of DIR commands I was using, and sure, could’ve just made a CMD, but who does that these days? (Says my friend. (Singular))

Then {stuff}… but it finally bubbled to the top of my to-do list… So I wrote a first draft Get-CruftyWebFiles script.

I’ve lots of enhancement ideas from here, but wanted to get something which basically worked. I think this basically works!

Sure, there’s potential duplication if sites and apps overlap (i.e. the same file might be listed repeatedly) (which is fine; I figure you weed that out in post production), and if your site is self-referential it might get caught in a loop (hit Ctrl+C if you think/know that’s you, and *stop doing that*)

So, feel free if you want to see how crufty your IIS 7.5+ (assumed? Tested on 8.5) sites are:

The Script: https://github.com/TristankMS/IIS-Junk

Usage (roughly):

Copy to target web server. Then from an Admin PS prompt:

  • .\Get-CruftyWebFiles.ps1   # scans all web content folders linked from Sites, and outputs to .\crufty.csv
  • .\Get-CruftyWebFiles.ps1 -WebSiteName "Default Web Site"     # limits to just the one website.
  • .\Get-CruftyWebFiles.ps1 -DomainName "YOURDOMAIN"    # checks for that text string used in txt / xml files as well

Pull the CSV into Excel, Format as Table, and get sorting and filtering. Severity works on a lower-is-more-critical basis. Look at anything with a zero first.

Todo: Cruft Scoring (severity’s already in there), more detections/words, general fit and finish. Also considering building a cruft module for a security scanner, or just for the script, to check what’s findable on a website given some knowledge of the structure.

* oh! No I’m not

Simple IIS Kerberos Q&A

Posting a hopefully-useful tidbit.

Hi Tristan,

Do you have by any chance a guide on how to set up IIS for kerberos auth? I’m helping my customer and I’m a beginner with IIS.

It is a farm of 6 IIS servers, they will be using a service acct.

DNS is configured to do the following resolution:

Websvr -> CNAME -> IP

So for instance the web site is webapp.example.net and points to a CName. The CName obviously is an fqdn (app-prod-vip.example.net) that points to an IP.

The IP points to the VIP of a load balancer that ultimately connects to the IIS server farm.

When setting the SPN do we use the websvr or the CName?

Also, does it matter the browser I’m using on the client for kerberos auth (such as chrome)

Anything special on the web server, besides configuring Windows authentication?

Thank you!

 

Here’s what I replied with:

 

Hola!

Couple of moving parts there – it (a different name, i.e. the load balancer name) won’t work with the default configuration.

You’ll need to ensure that the SPN for the CNAME is only assigned to the service account running the App Pool. If it’s on more than one account, it’s broken.

A DA needs to run:

SetSPN -S http/cname-of-app.fqdn.com DOMAIN\AppPoolAccountName

Where DOMAIN\AppPoolAccountName is the service account you set up for the application.

And that should get kerb where it needs to be from an SPN perspective. If other SPNs have been tried already, they need to be removed (and SetSPN -S should tell you that).
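To double-check where things landed, SetSPN can tell you (the names here are placeholders, as above):

```powershell
setspn -Q http/cname-of-app.fqdn.com    # which account (if any) currently holds the SPN
setspn -L DOMAIN\AppPoolAccountName     # list every SPN registered on the App Pool account
setspn -X                               # sweep for duplicate SPNs
```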

(Once you’ve established an SPN for the account, the Delegation tab should appear for it in ADUC. This allows you to configure constraints or delegation, which you might not be doing, so we’ll cover that last.)

Next, you need to ensure the App Pool Account is set to DOMAIN\AppPoolAccountName (i.e. the same “custom” domain account) on all the boxes. (ApplicationPoolIdentity or NetworkService or LocalSystem or anything other than a Domain account won’t work for load-balanced Kerberos authentication.)

Then, you need to either

  • disable Kernel-mode authentication, or
  • set useAppPoolCredentials=true

on them all.

There’s a tickbox for K-mode auth under Windows Authentication in IIS; or useAppPoolCredentials goes (I think) in web.config so might be preferable. What either of these does is to move from using the box identity (machine account) to validate tickets, to using the App Pool Account to validate tickets. This is required for a farm scenario, but for a single-box scenario, it’s not necessary (only SPN registration).
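For the record, both knobs live in the windowsAuthentication section at the applicationHost level, so either option can be set with appcmd – a sketch (run elevated; "Default Web Site" is a placeholder):

```powershell
# Option A: keep kernel-mode auth, but have it use the App Pool account's key
& "$env:windir\system32\inetsrv\appcmd.exe" set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /useAppPoolCredentials:true /commit:apphost

# Option B: turn kernel-mode auth off entirely
& "$env:windir\system32\inetsrv\appcmd.exe" set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /useKernelMode:false /commit:apphost
```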

Once that’s done, Kerb should work to the websites, which can be validated with a network trace, or by looking at logs. (let’s throw in a reboot after k-mode auth is toggled off for good measure) (Picking Kerb in logs – short version: single 401 www-auth:negotiate/request with long ticket/200 is Kerb, 401/401/200 is NTLM).
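A lazy way to eyeball that handshake pattern without a network trace is to grep the W3C logs for 401s – sketch only (the path is the IIS default; field order depends on your #Fields: configuration):

```powershell
# Pull the most recent 401 entries; one 401 per request before success suggests
# Kerb negotiation, while back-to-back pairs of 401s suggest NTLM
Select-String -Path 'C:\inetpub\logs\LogFiles\W3SVC1\*.log' -Pattern ' 401 ' |
    Select-Object -Last 20
```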

I’d always test with IE, I *think* if IE works then Chrome has a good chance. If it doesn’t, no chance.

Always test from a remote box (avoids reflection protection), and use klist purge (and a closed browser) to reset between tests.

If Kerb works to the site, you can then configure the App Pool Account in ADUC for constrained delegation to the next hop in the same way. Hit Add, browse for the process identity it’s connecting to (i.e. often the service account if the process is running as a domain identity, not the box name, but if not, the box name) and then pick the right SPN from the list.

HTH!

Tip: Check that your Offline Root CA is actually Offline, mmkay?

I spend a fair whack of time chatting PKI and certificates with customers, and tyre-kicking their environments as part of the Active Directory Certificate Services Assessment (or ADCSA – available via Premier Support).

Many customers have a fairly standard design, often deployed by a partner (it’s the “off the shelf plus customize” option), which includes an Offline Root CA, and one or more online Issuing CAs.

The Offline Root purely exists to sign Issuing CA certificates and publish a CRL occasionally, and is typically airgapped if it’s physical. The Issuing CAs are the ones which are typically connected to directly (or via a Web Service) by client computers.

What’s perhaps underemphasised in some designs is the implied meaning of the word “Offline”. “Offline” means “no network cable”.

 

Pros of an Offline Root CA:

– Because it’s airgapped (i.e. has no network cable):

– You don’t have to service it (i.e. patch, service pack, update – except for reliability issues)

– You don’t have to closely manage its operational health – just boot it up once a quarter, copy a CRL to local storage, and shut it down

– It gets to use separate credentials from the rest of AD, so it’s isolated from credential attack

  – Also, remember, no network cable = limited network attack surface, right?

 

Cons of an Offline Root CA:

– Because it’s got no network cable:

– It’s harder to manage than an Online Root CA, which it becomes if you plug a network cable into it.

  – You need to use console access if it’s on a VM host (which, broadly, is an iffy idea at most organizations. Yes, probably including yours.)

  – You need to use virtual floppies, or real floppies, or probably more likely USB sticks to transfer files to and from it.

 

Okay, so you trade no network cable and hands-on management for greatly improved security, with the goal of keeping your root’s private key safe.

Airgapped is pretty safe: no network cable is a fairly heavy defence against the casual network-based attacker!

 

But…

But: at some point, due to error or oversight, many “Offline” Root CAs get attached to the network. Maybe because a new admin wants to RDP in for some reason, maybe because it’s more convenient when trying to publish the CRL. Whatever. Now, we can build a list of pros and cons for your Online Offline CA:

 

Pros and Cons of an Online Offline Root CA:

– It’s one of the most vulnerable hosts on the network, because it’s not patched, because it’s not part of any patch group or configured for WSUS, and you don’t need to patch Offline CAs, right?

– Does it have antivirus or firewall or even local policy applied? Probably not, because it’s been designed not to be network-attached. (Not that settings or AV or software defences beat an unpatched exploit, but let’s suggest they might help with some common attacks.)

– It’s easier to manage, because it’s network-accessible. So, yay that!

 

The Good News:

Hah! Just kidding! If you’ve read carefully this far and think there’s any good news, my communication skills have failed you. There’s not, and depending on your other security practices and the assurance level required for certs issued by this CA, you should think carefully about starting over with provable key provenance.

If your Offline (unpatched!) CA has been Online for any period of time, the Root’s private key provenance is unclear. Could the machine have been exploited, and the private key leaked, undetected? Unpatched security vulnerabilities make that more likely. Lack of close auditing reduces assurance. The primary security control in place – i.e. no network cable – was removed.

But how useful is a root private key, really? Well, an attacker can simply use it to mint any certificate which it’s likely your organization’s computers will trust, for any purpose, undetectably. The use of such a cert would be basically invisible to your organization without unbelievably close monitoring, the type of unbelievably close monitoring you probably don’t have if an Offline CA ends up Online for any length of time.

 

So what do I do?

Check now. Don’t wait for me to show up and look horrified/disappointed! Or worse, tell jokes. You wouldn’t like me when I’m funny.

If you’re running a physical computer as an Offline CA, it’s pretty straightforward. It’s hopefully in a safe, or in another secured location where you can either prove it’s airgapped or it’s very easy to infer that it is (nobody’s in the safe; computer is airgapped).

If you’re running on a virtualized OS, it’s murkier. Maybe someone adjusted the virtual network settings; maybe it’s attached to a fake network; maybe it’s unattached completely. Virtualized CAs have many other security implications which need attention if you’re serious about your PKI.

So quick test: If you can RDP to the CA from an internal network client, it’s not airgapped. And it’s not Offline. And if it’s not Offline, and it’s not being updated with all haste at the start of each month, odds are it’s been vulnerable to known exploits for a period of time already (without restrictive firewall policies – but if you think about an RDP vulnerability, maybe even with restrictive firewall policies in place…).
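That quick test is scriptable from any management box (the CA name is a placeholder – any success below means it’s not airgapped):

```powershell
# If this connects, your "Offline" Root CA is on the network
Test-NetConnection -ComputerName 'CONTOSO-ROOT-CA' -Port 3389

# Even a live DNS record is a hint worth chasing down
Resolve-DnsName 'CONTOSO-ROOT-CA' -ErrorAction SilentlyContinue
```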

There’s an extensive white paper on Securing Public Key Infrastructure which helps talk through many important aspects of CA security.

But this is something I’ve seen in the wild, and it’s scary.

So as a final note: if you come across something called “XXQ-BLAH-CA01” in a VM console and it doesn’t seem to be network connected: Leave it unplugged!

 

(Also, please avoid deleting it. No, there’s no story there. Why would you ask?)

Custom Password Filters

Back from holiday now, and almost over the jetlag. Almost.

A question came up today about Password Filter DLLs, and the documentation always seems to be hard to find, so I’ve popped up a quick summary of everything I know here.

Back In The Day of NT4, there was an optional component that Microsoft provided called PASSFILT.DLL that could be installed to perform password complexity checks. These days, equivalent functionality is built in to the base OS – Windows 2000 and everything since (2003, 2008, 2012, 2016, and so on).

Anyway, the problem is that the Platform SDK article Installing and Registering a Password Filter DLL makes the assumption that you want more security than Windows’ default password complexity check, and so lists the final step as being:

4. Ensure that the Passwords must meet complexity requirements policy setting is enabled.
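As an aside, registration per that article boils down to adding your DLL’s name to the Notification Packages value under the Lsa registry key (and rebooting), so you can see what’s already registered on a DC with:

```powershell
# Built-in entries like scecli are normal; anything extra is a custom filter
(Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa').'Notification Packages'
```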

If you’d written a filter that, say, only checked that the user wasn’t using their own name as a part of the password, and you wanted this check to be an additional check over the Microsoft built-in password complexity filter, this would be a Good Thing, because a password is only considered valid if it satisfies all installed password filters. It’s an AND relationship:

  • Filter1 must return true AND
  • Filter2 must return true AND
  • Filter3 must return true

So, all the filters run for every password change, and if they all say “yep, that’s fine with me”, then the password change is successful.
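The relationship can be illustrated in a few lines – purely a sketch of the logic (real filters are native DLLs exporting PasswordFilter, not script blocks):

```powershell
# Stand-ins for installed filters: each must return $true for the change to succeed
$filters = @(
    { param($pw) $pw.Length -ge 8 },            # stand-in for the Windows complexity check
    { param($pw) $pw -notmatch 'Micro\$oft' }   # stand-in for a custom filter
)
$candidate = 'Sample#Passw0rd'
$rejectedBy = @($filters | Where-Object { -not (& $_ $candidate) })
$accepted = ($rejectedBy.Count -eq 0)   # true only if every filter said yes
```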

If you wrote a filter that checked for the word “Micro$oft” (or a 1337 derivative of your own company name) in a password, and rejected it if it was present, and followed the instructions at the above link, you’d have a system that would accept:

  • strong passwords (as defined by your Windows complexity policy)
  • that didn’t contain that particular word (as defined by your filter)

To extend the model, if your company had compiled a massive database of personal information on its employees,  you could similarly check that they weren’t using their wife’s name, blood type, social security number (Hello Americans!), dog’s name, daughter’s boyfriend’s name or brand of hair gel as a part of their password, and be assured that the password met Windows’ password complexity requirements… though slightly more seriously it’s a good idea to keep these things somewhat lightweight.

The Windows Password Complexity setting simply enables or disables the default “complex” Windows checks, so you don’t have to muck around with DLL installation and removal to get the regular “complex” stuff, it just sets a registry key (via policy). The Windows password filter is always installed and always runs to some extent, it just doesn’t always take action (depending on those registry settings).
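You can confirm what the effective setting is on a given box by exporting the local security policy – a sketch, from an elevated prompt:

```powershell
secedit /export /cfg "$env:TEMP\secpol.inf"
Select-String -Path "$env:TEMP\secpol.inf" -Pattern 'PasswordComplexity'
# PasswordComplexity = 1 means the built-in complex check is being enforced
```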

Over the years I’ve worked with password filters, it’s (disappointingly) been reasonably common that some customers actually want reduced security in the password complexity space (often because it’s more difficult to upgrade legacy systems that can’t handle > 5 character passwords and lower case, or other similarly horrific constraints). As the alternative is “no password complexity” at the Windows filter level, we’re not really that flexible, and any security measure is potentially better than none.

If you’re coding a password complexity filter that is meant to replace rather than complement the Windows complexity checks, you need to disable the “Passwords must meet complexity requirements” setting to make yours the One True Password Filter (assuming no other custom filters are installed that make it impossible to produce a valid password… be careful with that too).

It’s worth calling out one other item around password filters: the error message received by clients isn’t configurable. The client always assumes the Windows password filter is in use, and is hard-coded to report the Windows complexity requirements (at least in part because there’s no mechanism for explaining to the client what the actual problem was).

(Update 2017-04: There was a feedback link here, but… the behaviour didn’t change for 20 years, so odds are we’ve moved on from passwords. And if you can modernize your environment, perhaps you can too? Hello!) (in all non-glibness, consider an unlock gesture tied to a device a more authentic validation than a shared character string which many folks will surrender for a bar of chocolate…) (OK so that was a cheap 2004 reference, but you have a security awareness program in place, right?)