LDAP Channel Binding and Load Balanced VIPs

Introduction and Background

Microsoft has been pushing companies to adopt settings that require signing and channel binding for LDAP. If you are looking at this post, you already knew that. Signing and encryption typically mean using port 636 (LDAPS, i.e. LDAP over TLS). Signing can also work on port 389 using STARTTLS.

Once the change to require LDAP integrity is implemented, simple/non-secure binds will start to fail. Secure binds on ports 389/3268 will continue to work, as will binds using LDAPS (636/3269).

Here is the Microsoft Article on the configuration change

https://support.microsoft.com/en-us/topic/2020-ldap-channel-binding-and-ldap-signing-requirements-for-windows-ef185fb8-00f7-167d-744c-f299a66fc00a

The mapping between the LDAP Signing Policy setting and the registry setting is as follows:

  • Policy Setting: “Domain controller: LDAP server signing requirements”
  • Registry Setting: LDAPServerIntegrity
  • DataType: DWORD
  • Registry Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
  Group Policy Setting        Registry Setting
  None                        1
  Require Signing             2

The mapping between the LDAP Channel Binding Policy setting and the registry setting is as follows:

  • Policy Setting: “Domain controller: LDAP server channel binding token requirements”
  • Registry Setting: LdapEnforceChannelBinding
  • DataType: DWORD
  • Registry Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters  
  Group Policy Setting        Registry Setting
  Never                       0
  When Supported              1
  Always                      2
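If you want to check what a given DC currently has in the registry (or stage a value outside of Group Policy), a minimal PowerShell sketch like the one below can help; the path and value names are the ones listed above, but treat this as a lab aid rather than a rollout procedure.

# Check the current LDAP signing / channel binding enforcement on a Domain Controller
$ntdsParams = 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters'
Get-ItemProperty -Path $ntdsParams | Select-Object LDAPServerIntegrity, LdapEnforceChannelBinding

# Example: stage channel binding at "When Supported" (1); 0 = Never, 2 = Always
# Set-ItemProperty -Path $ntdsParams -Name LdapEnforceChannelBinding -Value 1 -Type DWord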

Now the problem

A lot of organizations front their Domain Controllers with Load Balancer VIPs (using technologies like F5 LTM or Citrix Netscaler ADC) so that users do not connect directly to individual Domain Controllers over LDAP.

LDAP channel binding is meant to prevent MiTM (Man-in-the-Middle) and relay attacks. With these new changes, the load balancer itself may be seen as a man-in-the-middle, so LDAP binds will fail for applications that support channel binding even when channel binding is configured to "When Supported". If it is set to always be required, all connections through a Load Balanced VIP will fail.

The only workaround I have found so far is to either have those applications talk to the Domain Controllers directly on port 636, or to disable channel binding on the Domain Controllers that are behind your LB VIP. The LDAP signing requirement works fine through a load balancer as long as a valid certificate with the correct SANs is present.

Here is a sample error you would get from LDP.exe when you try to connect to a Domain Controller using a simple bind on port 389. This error is actually good news, since hopefully everything you care about has already been converted to secure binds within your organization (not to minimize that work, it took months of effort).

———–

res = ldap_simple_bind_s(ld, 'MYDOMAIN\myldapuser', &lt;unavailable&gt;); // v.3

Error <8>: ldap_simple_bind_s() failed: Strong Authentication Required

Server error: 00002028: LdapErr: DSID-0C090276, comment: The server requires binds to turn on integrity checking if SSL\TLS are not already active on the connection, data 0, v2580

Error 0x2028 A more secure authentication method is required for this server.

———–

If you are connecting through some kind of reverse proxy or load balancer, this is the error you would get from ldp.exe instead (because the channel binding token will not match between the client and the DC):

53 = ldap_set_option(ld, LDAP_OPT_ENCRYPT, 1)

res = ldap_bind_s(ld, NULL, &NtAuthIdentity, NEGOTIATE (1158)); // v.3

{NtAuthIdentity: User='myldapuser'; Pwd=&lt;unavailable&gt;; domain = 'MYDOMAIN'}

Error <49>: ldap_bind_s() failed: Invalid Credentials.

Server error: 80090346: LdapErr: DSID-0C090590, comment: AcceptSecurityContext error, data 80090346, v2580

Error 0x80090346 Client’s supplied SSPI channel bindings were incorrect.
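If you want to reproduce these tests without LDP.exe, a rough PowerShell sketch using System.DirectoryServices.Protocols can perform a signed and sealed Negotiate bind against either a DC or a VIP; the hostname below is a placeholder, and a channel binding mismatch through a VIP should surface as the same 80090346-style invalid credentials error.

# Minimal signed/sealed bind test; dc01.mydomain.local is a placeholder for a DC or a VIP name
Add-Type -AssemblyName System.DirectoryServices.Protocols
$target = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier('dc01.mydomain.local', 389)
$cred = (Get-Credential).GetNetworkCredential()
$conn = New-Object System.DirectoryServices.Protocols.LdapConnection($target, $cred, [System.DirectoryServices.Protocols.AuthType]::Negotiate)
$conn.SessionOptions.Signing = $true   # request integrity (signing)
$conn.SessionOptions.Sealing = $true   # request encryption
try { $conn.Bind(); 'Bind succeeded' } catch { $_.Exception.Message }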

Additional Reading

RFC 5056 https://tools.ietf.org/html/rfc5056

From https://www.hub.trimarcsecurity.com/post/ldap-channel-binding-and-signing

“LDAP channel binding is the act of tying the TLS tunnel and the application layer (leveraged by LDAP) together to create a unique identifier (channel binding token) for that specific LDAP session. This channel binding token (CBT) can only be used within that TLS tunnel and therefore prevents a “stolen” LDAP ticket from being leveraged elsewhere.”

RFC 2743 https://tools.ietf.org/html/rfc2743

1.1.6:  Channel Bindings

   The GSS-API accommodates the concept of caller-provided channel
   binding ("chan_binding") information.  Channel bindings are used to
   strengthen the quality with which peer entity authentication is
   provided during context establishment, by limiting the scope within
   which an intercepted context establishment token can be reused by an
   attacker. Specifically, they enable GSS-API callers to bind the
   establishment of a security context to relevant characteristics
   (e.g., addresses, transformed representations of encryption keys) of
   the underlying communications channel, of protection mechanisms
   applied to that communications channel, and to application-specific
   data.

   The caller initiating a security context must determine the
   appropriate channel binding values to provide as input to the
   GSS_Init_sec_context() call, and consistent values must be provided
   to GSS_Accept_sec_context() by the context's target, in order for
   both peers' GSS-API mechanisms to validate that received tokens
   possess correct channel-related characteristics. Use or non-use of
   the GSS-API channel binding facility is a caller option.  GSS-API
   mechanisms can operate in an environment where NULL channel bindings
   are presented; mechanism implementors are encouraged, but not



Linn                        Standards Track                    [Page 16]

 
RFC 2743                        GSS-API                     January 2000


   required, to make use of caller-provided channel binding data within
   their mechanisms. Callers should not assume that underlying
   mechanisms provide confidentiality protection for channel binding
   information.

   When non-NULL channel bindings are provided by callers, certain
   mechanisms can offer enhanced security value by interpreting the
   bindings' content (rather than simply representing those bindings, or
   integrity check values computed on them, within tokens) and will
   therefore depend on presentation of specific data in a defined
   format. To this end, agreements among mechanism implementors are
   defining conventional interpretations for the contents of channel
   binding arguments, including address specifiers (with content
   dependent on communications protocol environment) for context
   initiators and acceptors. (These conventions are being incorporated
   in GSS-API mechanism specifications and into the GSS-API C language
   bindings specification.) In order for GSS-API callers to be portable
   across multiple mechanisms and achieve the full security
   functionality which each mechanism can provide, it is strongly
   recommended that GSS-API callers provide channel bindings consistent
   with these conventions and those of the networking environment in
   which they operate.

Kerberos Anomaly with MacOS and 2012 R2

Caution: This is a rambling post..

I set up a fresh lab environment at home, joining my MacOS and Linux machines to the domain. After running kinit, I was getting a message in my Macbook terminal saying "Encryption Type arcfour-hmac-md5(23) used for authentication is weak and will be deprecated".

Given that pen testing exercises are one of the things I hope to do on this network, that got my attention.

I ran tcpdump in the terminal to capture Kerberos traffic:

sudo tcpdump port 88 -s0 -w kerb.cap

I noticed something interesting: I was seeing an error message and then the exchange was switching to TCP.
User Datagram Protocol, Src Port: 88, Dst Port: 63161
Kerberos
krb-error
pvno: 5
msg-type: krb-error (30)
stime: 2020-07-05 00:13:19 (UTC)
susec: 848138
error-code: eRR-RESPONSE-TOO-BIG (52)

After switching to TCP, the AS-REQ would get back an AS-REP message:


Transmission Control Protocol, Src Port: 88, Dst Port: 53337, Seq: 1449, Ack: 252, Len: 48
[2 Reassembled TCP Segments (1496 bytes): #9(1448), #10(48)]
[Frame: 9, payload: 0-1447 (1448 bytes)]
[Frame: 10, payload: 1448-1495 (48 bytes)]

[Segment count: 2]
[Reassembled TCP length: 1496]
[Reassembled TCP Data: 000005d46b8205d0308205cca003020105a10302010ba310…]
Kerberos
Record Mark: 1492 bytes
as-rep
pvno: 5
msg-type: krb-as-rep (11)
crealm: CORP.IOSOL.NET
cname
ticket
tkt-vno: 5
realm: CORP.IOSOL.NET
sname
enc-part
etype: eTYPE-AES256-CTS-HMAC-SHA1-96 (18)
kvno: 2
cipher: a6e9ca2098672c82579b167f8bb6d7c4aea5e7a006d26980…
enc-part
etype: eTYPE-ARCFOUR-HMAC-MD5 (23)

kvno: 1
cipher: f3c56007743c4ceac9d04550b3752ee9dd2f22f33a21d9b0…

Now, two things. One is the re-assembled PDU: I get that UDP couldn't carry the response because it would get fragmented, while TCP received 2 frames that it had to reassemble, which it can manage. The question was why the response was so big. The other issue was: what is the second enc-part that is using RC4-HMAC-MD5? My computer had just told the KDC in the AS-REQ that it supported AES256-CTS-HMAC-SHA1. We will dive into the RFC (https://tools.ietf.org/html/rfc4120#section-3.1.3) later; for now, what is the deal with the packet size?

Searching for eRR-RESPONSE-TOO-BIG (52) brought up links discussing UDP packet size per OS: https://social.technet.microsoft.com/Forums/en-US/4ce35807-f002-4dde-8e94-90b63642a02e/krb5kdcerrpreauthrequired-and-krb5krberrresponsetoobig?forum=winserverDS and https://social.technet.microsoft.com/Forums/en-US/acf9484d-b059-4230-b428-533cc148e5ae/kerberos-error-52-response-too-big-on-macos?forum=winserversecurity. Someone on the second post was talking about shutting down their 2012 R2 DC, so I went about my other task of adding a second domain controller to the domain, this one running Server 2019 Core.

After adding the second DC, updating my client DNS to point to the new DC, and transferring the FSMO roles, I shut down the 2012 R2 DC and Kerberos authentication started working over UDP. Checking the box on the user account indicating that the account supports AES is why it switched to UDP.

It turns out that the default UDP packet size for 2012 R2 is 1514 bytes, while my Mac and wireless router have their MTU set to 1500. That would explain the response-too-big error. I think Windows 2012 R2 (with all the latest patches as of July 2020) has a mismatch between the OS MTU and the MaxPacketSize / MaxDatagramReplySize. More to investigate there on how to fix it for 2012 R2 DCs (even though 2012 should be gotten rid of in all corporate environments anyway). Here is a Microsoft reference regarding KDC Registry Settings.
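If you want to compare the KDC's datagram reply limit with your path MTU, something along these lines should work on the DC. The MaxDatagramReplySize value name and the Services\Kdc path are taken from the KDC registry settings reference mentioned above; treat them as assumptions and verify against that article, and note the value may not exist until you create it.

# On the Domain Controller: check whether MaxDatagramReplySize has been set explicitly
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Kdc' -Name MaxDatagramReplySize -ErrorAction SilentlyContinue

# Hypothetical example: cap UDP replies below a 1500-byte MTU, then restart the KDC service
# Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Kdc' -Name MaxDatagramReplySize -Value 1400 -Type DWord
# Restart-Service Kdc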

Meanwhile, back at the ranch, there was still the mystery of the RC4 enc-part in the response. I decided the simplest thing to do would be to check the box that says this client supports AES 128/256 in the AD user properties and log back on. This made the AS-REP response come back with AES instead of the RC4 part.
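That checkbox maps to the msDS-SupportedEncryptionTypes attribute on the account, so the same change can be scripted; the sketch below assumes the ActiveDirectory RSAT module, uses 24 (AES128 = 0x8 plus AES256 = 0x10), and uses a placeholder account name. If the account's password predates AES support in the domain, a password reset may still be needed before AES keys actually exist.

# Mark an account as supporting AES (24 = AES128 0x8 + AES256 0x10)
Set-ADUser -Identity sunil -Replace @{'msDS-SupportedEncryptionTypes' = 24}

# Verify
Get-ADUser -Identity sunil -Properties 'msDS-SupportedEncryptionTypes' |
    Select-Object SamAccountName, 'msDS-SupportedEncryptionTypes'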

(Packet capture screenshots omitted: the "Before" request/response shows the RC4 enc-part; the "After" capture shows the same exchange using AES.)

Looking at my packet captures, I am not sure if this started after I updated my resolv.conf or after I shut down the 2012 R2 Domain Controller, but I think some setting specific to the 2012 DC is what caused the RC4 response.

According to the RFC ..

   In response to an AS request, if there are multiple encryption keys
   registered for a client in the Kerberos database, then the etype
   field from the AS request is used by the KDC to select the encryption
   method to be used to protect the encrypted part of the KRB_AS_REP
   message that is sent to the client.  If there is more than one
   supported strong encryption type in the etype list, the KDC SHOULD
   use the first valid strong etype for which an encryption key is
   available.

   When the user's key is generated from a password or pass phrase, the
   string-to-key function for the particular encryption key type is
   used, as specified in [RFC3961].  The salt value and additional
   parameters for the string-to-key function have default values
   (specified by Section 4 and by the encryption mechanism
   specification, respectively) that may be overridden by
   pre-authentication data (PA-PW-SALT, PA-AFS3-SALT, PA-ETYPE-INFO,
   PA-ETYPE-INFO2, etc).  Since the KDC is presumed to store a copy of
   the resulting key only, these values should not be changed for
   password-based keys except when changing the principal's key.

   When the AS server is to include pre-authentication data in a
   KRB-ERROR or in an AS-REP, it MUST use PA-ETYPE-INFO2, not PA-ETYPE-
   INFO, if the etype field of the client's AS-REQ lists at least one
   "newer" encryption type.  Otherwise (when the etype field of the
   client's AS-REQ does not list any "newer" encryption types), it MUST
   send both PA-ETYPE-INFO2 and PA-ETYPE-INFO (both with an entry for
   each enctype).  A "newer" enctype is any enctype first officially
   specified concurrently with or subsequent to the issue of this RFC.
   The enctypes DES, 3DES, or RC4 and any defined in [RFC1510] are not
   "newer" enctypes.


   It is not possible to generate a user's key reliably given a pass
   phrase without contacting the KDC, since it will not be known whether
   alternate salt or parameter values are required.

   The KDC will attempt to assign the type of the random session key
   from the list of methods in the etype field.  The KDC will select the
   appropriate type using the list of methods provided and information
   from the Kerberos database indicating acceptable encryption methods
   for the application server.  The KDC will not issue tickets with a
   weak session key encryption type.

I think something else is at play here

1.5.2.  Sending Extensible Messages

   Care must be taken to ensure that old implementations can understand
   messages sent to them, even if they do not understand an extension
   that is used.  Unless the sender knows that an extension is
   supported, the extension cannot change the semantics of the core
   message or previously defined extensions.

   For example, an extension including key information necessary to
   decrypt the encrypted part of a KDC-REP could only be used in
   situations where the recipient was known to support the extension.
   Thus when designing such extensions it is important to provide a way
   for the recipient to notify the sender of support for the extension.
   For example in the case of an extension that changes the KDC-REP
   reply key, the client could indicate support for the extension by
   including a padata element in the AS-REQ sequence.  The KDC should
   only use the extension if this padata element is present in the
   AS-REQ.  Even if policy requires the use of the extension, it is
   better to return an error indicating that the extension is required
   than to use the extension when the recipient may not support it.
   Debugging implementations that do not interoperate is easier when
   errors are returned.

Looking at the old request, the PADATA field was using RC4, potentially because the client was not able to resolve the name of the DC?

The new request sent the PADATA encrypted using AES-CTS-HMAC-SHA1, so something changed at the client end. Going back to the 2012 DC and shutting down the new DC to observe the behavior revealed that it was not specific to the DC.

So I stayed with the 2012 R2 DC and removed the /etc/resolv.conf entries for DNS, instead just using the DNS servers from the wireless network connection.

Here is what I realized in the end: when the AES boxes for an account are unchecked, the KDC_ERR_PREAUTH_REQUIRED message returns the kind of key the KDC has for the user, which is what causes the client to use RC4; it had nothing to do with the operating system. But when the user has AES support checked on the account, the KDC returns AES as the chosen etype, which leads to the packet being small enough that it can be sent over UDP. There is still some confusion when I use different DCs (I sometimes get a TCP response even when using AES), so I am not sure about the TCP/UDP part, but at least I can say that the RC4 mystery is solved.

Authenticating as Sunil against the 2019 DC, the first 4 events show the flow; as you can see, event 4 happens over UDP, and the etype for the enc-part is AES.

Trying to authenticate as Administrator@CORP.IOSOL.NET, the PREAUTH REQUIRED message returns an INFO2 entry with an eType of ARCFOUR (RC4), and the subsequent AS-REQ uses the RC4 etype for the PA-ENC-TIMESTAMP, which leads to the DC returning a RESPONSE TOO BIG error and a retry over TCP.

I may revisit this post and update it later

Detecting Basic Auth for Office 365 using Okta Logs

Background

What is Basic Auth

If you found this article, you probably already know this part. Basic auth is traditional authentication that relies on a username and password being sent over HTTP(S). Its weakness is that it is susceptible to attacks by malicious actors such as credential stuffing and password spraying.

Why it matters

Corporate resources are exposed to the internet (mail, cloud services, etc.), and without MFA, if a malicious actor can guess one weak password for a single user, they can get a foothold in your network and start moving laterally until they potentially control everything.

How to Protect

The mitigation for these kinds of attacks is Multi-Factor Authentication (MFA), which is available with modern authentication/authorization techniques like SAML/WS-Fed or OAuth. These rely on an identity provider to validate the end user and grant them a token to authenticate to an application, which allows MFA to be added during the authentication process. Just knowing a user's username and password is no longer sufficient; you also need access to something they have (their phone, a hardware token, etc.). Enforcing MFA significantly enhances security.

The Problem

Modern auth only works for the clients and protocols in Office 365 that support it, and is not compatible with several protocols and clients. The default, then, is to continue supporting Basic Auth for everyone (since we do not know who needs it and who doesn't). But if you don't disable Basic Auth in Office 365, your MFA will not be very helpful in protecting you, since it can be bypassed. Basic authentication has to be completely disabled to get the benefit of your MFA implementation.

Solution

The better, although tougher, alternative is to identify the users that have a genuine need for Basic Auth and disable Basic Authentication for everyone else. Before going there we need to identify who is currently using Basic Auth, which is what this post is about (specifically focused on Office 365; there are of course several other possible scenarios). Here are the items we will cover:

  1. Identify Applications Using Basic Auth to access Office 365
  2. Identify Users who will be impacted if Basic Auth is disabled
  3. Test Disabling Basic Auth with Pilot Group representative of overall population and note the issues
  4. If a user in the pilot group has trouble, be prepared to change their authentication policy to re-enable Basic Auth. Record the users and scenarios that require Basic Auth
  5. Update the client software on Corporate Devices as needed (Desktops and Mobile).
  6. Notify users to update their unmanaged clients

If you have the option of updating all client computers (Windows/Mac), you should do it, because it is the simplest and best option. In addition, we need to communicate to our users that they need to update their mobile clients to use modern auth, the date support will be dropped, and what will possibly break. You will also need to identify the mail-enabled service accounts that need to continue authenticating using Basic Auth.

If you happen to be in a scenario where all mail clients cannot be updated for some reason, you could start a phased approach: disable Basic Authentication for users that we know will not be impacted by it. Communication is still important, both to end users and to support staff, including how to assist users in either resolving the issues or rolling back the policy for those users.

The technical process of disabling Basic Auth for users is pretty simple in Office 365 using PowerShell. One of the big issues ends up being identifying all the clients in your environment that are using Basic Auth and will be impacted by the change, and then upgrading them or excluding them from that phase of your project.

The details on how to create authentication policies in Office 365 and assign them to disable Basic Auth for users are explained well in this Microsoft article.
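For reference, the Exchange Online side of this boils down to a couple of cmdlets from that article; this is a rough sketch rather than the full procedure, the policy names and mailbox addresses are placeholders, and the parameter names are as I recall them, so double-check against the article before running.

# Sketch: create a policy that blocks Basic Auth (new policies block all protocols by default)
New-AuthenticationPolicy -Name "Block Basic Auth"

# Assign it to a pilot user, or set it as the organization default
Set-User -Identity pilot.user@domain.com -AuthenticationPolicy "Block Basic Auth"
Set-OrganizationConfig -DefaultAuthenticationPolicy "Block Basic Auth"

# Roll back for a user who still needs Basic Auth (example: allow IMAP only)
New-AuthenticationPolicy -Name "Allow Basic Auth IMAP" -AllowBasicAuthImap
Set-User -Identity legacy.user@domain.com -AuthenticationPolicy "Allow Basic Auth IMAP"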

Office 365 / Azure AD does not give you sign-in information showing which users and clients are using Basic Auth unless you are using Azure AD as your IdP and/or own a premium Azure AD subscription (P1/P2). That can be overkill if the only information you want is which clients are using Basic Auth. This means the information has to be extracted from your IdP. Let's look at how to do this in Okta.

Query Okta for Basic Auth information:

Login to Okta Admin Console

Go to Reports->System Log and search for

authentication_context.credential_type eq "PASSWORD" and debug_context.debug_data.request_uri sw "/app/office365/&lt;appid-string&gt;/sso/wsfed/active"

I would recommend querying at least 2 weeks worth of data if not longer and then exporting it to CSV. This may end up being anywhere from several MB to several GB depending on your organization size and utilization of the queried authentication method.
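If you would rather pull the same data from the Okta System Log API instead of the console export, something along these lines should work; the org URL and API token are placeholders, the appid placeholder is the same one as in the query above, and large result sets will need a paging loop since the API pages via Link headers.

# Sketch: pull the Basic Auth (WS-Fed active) events from the Okta System Log API
$okta    = 'https://yourorg.okta.com'
$headers = @{ Authorization = 'SSWS <api-token>'; Accept = 'application/json' }
$filter  = 'authentication_context.credential_type eq "PASSWORD" and debug_context.debug_data.request_uri sw "/app/office365/<appid-string>/sso/wsfed/active"'
$since   = (Get-Date).AddDays(-14).ToUniversalTime().ToString('yyyy-MM-ddTHH:mm:ssZ')
$uri     = "$okta/api/v1/logs?since=$since&limit=1000&filter=$([uri]::EscapeDataString($filter))"
$events  = Invoke-RestMethod -Uri $uri -Headers $headers   # NOTE: only the first page; follow Link headers for more
$events | Select-Object @{n='user';e={$_.actor.alternateId}}, @{n='client';e={$_.client.userAgent.rawUserAgent}} |
    Sort-Object user, client -Unique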

Once you have the export from Okta, the next task is to create a pivot table and find unique usernames by client type. MSRPC, for example, should give you Outlook clients using RPC over HTTP. There will also be other entries by Outlook version. Once you have found the users authenticating with older methods and the machines running older versions of Outlook (or not configured for ADAL, as with the default config for Outlook 2013), that should kick off the effort to upgrade internal clients. Mobile users will need to be told that they may need to set up their mail again. A rough PowerShell equivalent of the pivot is sketched below.
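If you would rather skip Excel, the same pivot can be approximated in PowerShell; the CSV path and the column names below are assumptions, so adjust them to whatever headers your Okta export actually contains.

# Sketch: unique users per client type from the CSV export
# NOTE: the column names ('Client User Agent', 'Actor Alternate ID') are assumptions - match them to your export's headers
$log = Import-Csv 'C:\data\okta-basic-auth.csv'
$log | Group-Object 'Client User Agent' |
    Select-Object Name, Count, @{n='UniqueUsers'; e={ ($_.Group.'Actor Alternate ID' | Sort-Object -Unique).Count }} |
    Sort-Object Count -Descending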

Sending Citrix Netscaler ADC Traffic TLS Statistics to your SIEM

This will be a quick post, essentially a replica of what I posted on the Citrix Forums on how to send traffic statistics from your Netscaler to a syslog server.

Using 192.168.1.50 as my syslog server placeholder; replace this with your syslog server's IP.
Setting userDefinedAuditlog to YES is required to get our custom messages to a log server; pick whatever other levels you would like forwarded. The commands further below need to be run on the Netscaler over an SSH session. First, here are two sample messages as they arrive at the syslog server:

2020-07-02T04:53:31.786Z 192.168.3.200 <130> 07/02/2020:04:53:31 GMT lb01-test 0-PPE-0 : default RESPONDER Message 2685782 0 : “Server: test-fe.domain.com Client: 192.168.5.61 Client_SSL_VERSION: TLS_12 CIPHER_SUITE: TLS1-AES-256-CBC-SHA VSERVER: TEST-FE RESPONSE_TIME 0 HEALTH_PERCENT 100 CONN_COUNT: 2”

2020-07-01T07:41:57.857Z 192.168.3.201 <130> 07/01/2020:07:41:57 GMT lb01-test 0-PPE-1 : default RESPONDER Message 1953344 0 : “Server: testvip2.domain.com Client: 192.168.5.61 Client_SSL_VERSION: TLS_12 CIPHER_SUITE: TLS1.2-ECDHE-RSA-AES256-GCM-SHA384 VSERVER: VSVR-TESTVIP2-443 RESPONSE_TIME 62929 HEALTH_PERCENT 100 CONN_COUNT: 3”

add audit syslogAction act_syslog_forward 192.168.1.50 -serverPort 514 -logLevel EMERGENCY ALERT CRITICAL ERROR WARNING -userDefinedAuditlog YES
add audit nslogAction act_nslog_forward 192.168.1.50 -serverPort 514 -logLevel EMERGENCY ALERT CRITICAL ERROR WARNING -userDefinedAuditlog YES
add audit syslogPolicy pol-syslog-forward true act_syslog_forward
add audit nslogPolicy pol-nslog-syslogserver true act_nslog_forward

add policy stringmap CLIENT_SSL_VERSION -comment "SSL Version StringMap"
bind policy stringmap CLIENT_SSL_VERSION 771 TLS_12
bind policy stringmap CLIENT_SSL_VERSION 772 TLS_13
bind policy stringmap CLIENT_SSL_VERSION 770 TLS_11
bind policy stringmap CLIENT_SSL_VERSION 769 TLS_10
bind policy stringmap CLIENT_SSL_VERSION 3 SSL_3
bind policy stringmap CLIENT_SSL_VERSION 2 SSL_2

add audit messageaction TLS_logging CRITICAL "\"Server: \"+HTTP.REQ.HOSTNAME+\" Client: \"+CLIENT.IP.SRC+\" Client_SSL_VERSION: \"+CLIENT.SSL.VERSION.TYPECAST_TEXT_T.MAP_STRING_DEFAULT_TO_KEY(\"CLIENT_SSL_VERSION\")+\" CIPHER_SUITE: \"+CLIENT.SSL.CIPHER_NAME+\" VSERVER: \"+ HTTP.REQ.LB_VSERVER.NAME + \" RESPONSE_TIME \"+HTTP.REQ.LB_VSERVER.RESPTIME" -logtoNewnslog YES
add responder policy TLS_version_loging true NOOP -logAction TLS_logging

At the moment I am having trouble getting the nslog to forward to the syslog server, but I am keeping it. This is referred to as “NewNSLog” in the UI.

bind audit nslogGlobal -policyName pol-nslog-syslogserver -priority 100

Forwarding the Syslog to a syslog server works

bind system global pol-syslog-forward -priority 100

This will bind the policy to the VIP, which will start the logging. Do this for every VIP you would like TLS stats for:

bind lb vserver VSVR-TESTVIP-443 -policyName TLS_version_loging -priority 100 -gotoPriorityExpression NEXT -type REQUEST

Assuming this Netscaler is in the DMZ, make sure it has a path to get this data to the syslog server.
Troubleshooting: go to the Linux shell by typing “shell” at the Netscaler prompt; running the command below will show the UDP packets being sent to the syslog server.

/netscaler/nstcpdump.sh -X dst host 192.168.1.50 and port 514

Netscaler Nitro API Powershell Module

I have started contributing to the PS-NITRO repository on GitHub, which is currently managed by CognitionIT. My changes are not merged into that repo at this time, but here is a link to the project on my GitHub: https://github.com/closedstack/PS-NITRO

I have also added some scripts that might be useful to other folks, like collecting an inventory of VIPs, certificate bindings, servers, ports, etc. There are also functions for those interested in getting realtime statistics for load balancer VIPs and posting them to a time series database like InfluxDB.

I will be posting more, but do check it out and let me know what you think or if you have questions.
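If you just want to see what talking to NITRO looks like without the module, here is a bare Invoke-RestMethod sketch; the address and credentials are placeholders, I am using the X-NITRO-USER/X-NITRO-PASS headers rather than a login session, and the stat field names are from the NITRO docs as I remember them, so check them against the API reference for your firmware.

# Sketch: list LB vservers and their state straight from the NITRO API
# (a self-signed management cert will need to be trusted or skipped for Invoke-RestMethod to work)
$ns      = 'https://netscaler.example.com'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = '<password>' }
$resp    = Invoke-RestMethod -Uri "$ns/nitro/v1/config/lbvserver" -Headers $headers -Method Get
$resp.lbvserver | Select-Object name, ipv46, port, servicetype, curstate

# Realtime stats come from the stat endpoint instead of config
$stats = Invoke-RestMethod -Uri "$ns/nitro/v1/stat/lbvserver" -Headers $headers -Method Get
$stats.lbvserver | Select-Object name, totalrequests, requestsrate, curclntconnections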

 

Get Airwatch Cloud MDM Inventory

The intent of this post is not to show every possible function or everything I do to manage MDM, but mostly to give people a starting point for getting information out of Airwatch aka Workspace ONE. For the latest on current APIs, check out https://as135.awmdm.com/api/help/#!/apis

$awBaseURL = 'https://as000.awmdm.com'
#region creds
 $apiKey = ''
 $userName = 'readonlyuser'
 $pass = 'password'
 #endregion
 $concateUserInfo = $userName + ":" + $pass
 $encoding = [System.Text.Encoding]::ASCII.GetBytes($concateUserInfo)
 $encodedString = [Convert]::ToBase64String($encoding)
 $restUserName = "Basic " + $encodedString
$headers = @{
 Authorization  = $restUserName
 'aw-tenant-code' = $apiKey
 'Accept' = 'application/json;version=2'
 'Content-Type' = 'application/json'
 }
#Devices
 $uuid=''
 $URLSuffixDeviceSearch = '/api/mdm/devices/search'
 $URLSuffixDeviceInfo = ('/api/mdm/devices/' + $uuid)
 $reqURI = $awBaseURL + $URLSuffixDeviceSearch
 $res = Invoke-RestMethod -Uri $reqURI -Method GET -Headers $headers
 $alldevices = $res.devices
 $enrolled = $alldevices | where {$_.EnrollmentStatus -eq 'Enrolled'} | select -Property UserName, EnrollmentStatus, UserEmailAddress, Platform, Model, `
    OperatingSystem, LastSeen, SerialNumber, MacAddress, Udid, AssetNumber, DeviceFriendlyName, LocationGroupName, ComplianceStatus, `
    CompromisedStatus, LastComplianceCheckOn | sort -property UserName
 $EnrolledUsers = $enrolled.userName | sort | select -unique
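As a quick follow-up, the $enrolled collection from the snippet above can be dumped to CSV for reporting or summarized per platform; the output path is just an example.

#Export the enrolled-device inventory and show a per-platform summary
$enrolled | Export-Csv -Path 'C:\data\aw_enrolled_devices.csv' -NoTypeInformation
$enrolled | Group-Object Platform | Select-Object Name, Count | Sort-Object Count -Descending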

Office 365 – Holiday Calendar

Simple ideas sometimes aren’t as simple to implement as the ask makes them sound. Got one such ask not too long ago.

I work for a global company that has offices in various regions across the world, and sometimes in our busy schedules we forget about holidays and say things like "Okay, I'll see you Monday", until someone reminds us: "You mean Tuesday, Monday is a holiday". "Well, why doesn't my calendar already tell me so?" Simple ask, but now how do I make it happen? Here is an approach I used to make this happen (if you do this, make sure you test with a few accounts and gradually expand; you are responsible for your own testing).

Pre-Requisites

  • Azure Account with Blob Storage
  • Azure Storage Explorer
  • Active Directory Attribute reserved to track which users already have the calendar for the current year imported (you can choose to use another database alternatively, but it is easier to query which users don’t have the calendar imported)
  • PST Files for each region (Only doing a single region in the script, but you can run it with a different list of Users + PST File each time). Avoid trying to do too much in a single script.

High Level Workflow:

  • Create a new PST in Outlook (one for each country that observes different holidays)
  • Setup Holiday Calendar in PST
  • Detach the PST from outlook
  • Upload to Azure Blob Storage
  • Get SAS URL and update the script
  • Get a list of users form AD that need to have the calendar imported into their mailbox
  • Create Mailbox Import Request job for each desired mailbox to import the PST from Blob storage into each of the mailboxes using the New-MailboxImportRequest Powershell cmdlet.
  • Monitor the import jobs, and for the mailboxes where the job completes successfully, update a custom attribute to the 4-digit year up to which they have holidays in their calendar, e.g. 2019. (This is used to find out who does not have the latest calendar imported.)
  • Export Job Report and Remove Completed Jobs in Office 365
  • Troubleshoot the failed jobs

Get SAS URL

Log in to Storage Explorer with an account that has Azure permissions; we will be using a “regular” Azure account with permissions to upload blob data to storage. We will upload the holiday calendar PST there. This allows us to get a SAS URL with an arbitrary expiration date instead of one that expires in ~30 days in O365 blob storage.

(Screenshot: generating the SAS token in Azure Storage Explorer)

This should give us a SAS URL that is valid until Dec 31st 2019, which is how long the calendar is valid. After that date the link will stop working, which is fine since the calendar has nothing of value past that date; we will bring in a new calendar each year. Anyone with the URL will be able to READ the PST file (no access to modify it), however it does not contain any sensitive data and is therefore safe to embed in scripts. Update the script with the URL; the variables $st, $BlobURI, $SignedURL and $year will need updating.
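If you would rather script it than click through Storage Explorer, the Az.Storage module can produce the same kind of SAS URL; the storage account, key, container, and blob names below are placeholders, and this is only a sketch of the idea.

# Sketch: generate a read-only SAS URL for the uploaded PST, valid until the end of the year
$ctx = New-AzStorageContext -StorageAccountName 'example' -StorageAccountKey '<storage-key>'
$SignedURL = New-AzStorageBlobSASToken -Context $ctx -Container 'myblob1' -Blob 'USHolidayCalendar.pst' `
    -Permission r -ExpiryTime (Get-Date '2019-12-31') -FullUri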

If you have used Microsoft Graph / Exchange REST or another mechanism to accomplish something similar, please do share. Here is the script

#Holiday Calendar Import Script.. will find users matching a criteria in AD and import
$st = "?st=2018-xxxxxxxxxx"
$BlobURI = "https://example.blob.core.windows.net/myblob1/USHolidayCalendar.pst"
$SignedURL = "https://example.blob.core.windows.net/myblob1/USHolidayCalendar.pst?st=2018-10-12TZ&amp;se=2020-01-01T1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
$year = "2019"

$DebugPreference= 'Continue'
Function Get-MDY{
    Get-Date -Format yyMMdd
}
Function Write-DebugLog{
PARAM(
    [string]$Message,
    [string]$file = "C:\scripts\holidaycalendar\importlog_$(Get-Mdy).log"
)
    $dt = Get-date -Format u
    "$dt $Message"| Out-File -FilePath $file -Append
    Write-Debug "$dt $Message"
}
$resultCSV = "C:\scripts\holidaycalendar\jobresults_prod_$(Get-Mdy).csv "
#This account should have O365 privileges to create import jobs
$cred = Get-Credential -Credential "some.account@domain.com"
#Clear out any old sessions
Get-PSSession | Remove-PSSession
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionURI https://ps.outlook.com/powershell/ -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session 

#Get Employees in US
$arr=@()
$employeeBaseDN = "OU=Employees,DC=domain,DC=com"
$customAttr = 'extensionAttribute1'
$emp = Get-ADUser -SearchBase $employeeBaseDN -Filter `
     {(co -eq 'US') -and (Enabled -eq $true) -and (EmailAddress -like '*') -and ((employeeType -eq 'Employee') -or (employeeType -eq 'Intern'))} `
     -Properties EmailAddress, co, employeeType, extensionAttribute1 | where {$_.extensionAttribute1 -ne "$year"}
$arr = $emp

Write-Debuglog -Message "Starting Import for $($arr.count) Users"

#foreach US Employee Run Import

$count = 1
$sleepSecs = 30
$afterEvery = 50

foreach($obj in $arr){
    $mbx = $obj.EmailAddress
    #Throttle script speed.. sleep for $sleepSecs seconds after submitting $afterEvery jobs
    if(($count % $afterEvery) -eq 0){
        Start-Sleep -Seconds $sleepSecs
    }
    #If Email Address has value was found
    if($mbx){
        $jobname = "mailbox_calendar_import_$($obj.SamAccountName)"
        Write-DebugLog -Message "Checking if we need to run $jobname for $mbx"
        $jobItem = Get-MailboxImportRequest -name $jobname
        #If the job exists in a failed state, remove it - because we are going to try again
        if(Get-MailboxImportRequest -name $jobname){
        	Get-MailboxImportRequest | where {$_.name -eq $jobname -and $_.status -eq "Failed"} |
        	  Remove-MailboxImportRequest -confirm:$false
        }
        if(-not ($jobItem | where {$_.Status -ne 'Failed'})){
            Write-Debuglog -message "Starting $jobname"
            if($jobItem){$jobItem | Remove-MailboxImportRequest -confirm:$false}
            try{
                $mailbox = Get-Mailbox $mbx
                New-MailboxImportRequest -Name $jobname -Mailbox $($mailbox.Name) -BadItemLimit 250 `
                    -AzureBlobStorageAccountUri $BlobURI -AzureSharedAccessSignatureToken $st
                $count++
            } catch{
                Write-DebugLog -Message "Couldn't create $jobname for $mbx"
            }
        }else{
            #The job is already in progress or has completed successfully for this mailbox; skip it. extensionAttribute1 gets updated later once the job shows as Completed.
            Write-Debuglog -Message "$jobname already in progress or completed.. not starting"

        }
    }
    else{
        Write-DebugLog -Message  "Mailbox not Found for $($obj.Name)"
    }

}

#Check status of import jobs .. this can take several hours to complete... O365 will finish it with low priority
#Just keep running this loop until it finishes
#The PowerShell connection times out at times; re-establish the session if necessary
#Use Import-PSSession -AllowClobber to allow bringing the new session in
Get-MailboxImportRequest  | where {$_.Name -match "mailbox_calendar_import" } | sort -Property Status | ogv

$pending = $true
#$VerbosePreference = 'Continue'
While($pending){
    $jobs = Get-MailboxImportRequest  | where {$_.Name -match "mailbox_calendar_import" }
    $isItDoneYet =  ($jobs | where {$_.status -ne "Completed" -and $_.status -ne "Failed"})
    $inProgress =  $jobs | where  {$_.Name -match "mailbox_calendar_import" -and $_.status -eq "InProgress"}
    $queued = $jobs | where  {$_.Name -match "mailbox_calendar_import" -and $_.status -eq "Queued"}
    $done = ($jobs | where  {$_.Name -match "mailbox_calendar_import" -and $_.status -eq "Completed"})
    $doneCount = ($done | Measure-Object).Count

    if($isItDoneYet){
        Write-Debuglog -Message  "$doneCount Done. Still working on $($isItDoneYet.Count) requests: $(($inProgress| Measure-Object).count) are in Progress and $($queued.Count) are queued"
        Start-Sleep -Seconds 300
    } else{
        Write-Debuglog -Message  "All Done. Completed $doneCount requests"
        $pending = $false
    }
}

#Export job status details for this run to CSV
$jobs | export-csv -NoTypeInformation -path $resultCSV 

#Foreach user where job is complete, lets go update AD and remove the job from O365
foreach($user in $arr){
    $jobname = "mailbox_calendar_import_$($user.SamAccountName)"
    $matchjob = $jobs | where {$_.name -eq $jobname -and ($_.status -eq 'Completed' -or $_.status -eq 'Failed')}
    if($matchJob -and $($matchjob.Status -eq 'Completed')){
        Write-Debuglog -Message "User: $($user.SamAccountName) Job: $jobname Status: $($matchjob.Status). Updating extensionAttribute1 in AD for $($user.SamAccountName)"
        Set-ADUser -Identity $($user.SamAccountName) -Replace @{extensionAttribute1 = "$year"}
        Remove-MailboxImportRequest -Identity $($matchjob.Identity) -Confirm:$false
        #Get-ADUser $($user.SamAccountName) -Properties extensionAttribute1
    } elseIf($matchJob -and $($matchjob.Status -eq 'Failed')){
        Write-Debuglog -Message "User: $($user.SamAccountName) Job: $jobname Status: $($matchjob.Status)"
        Write-Debuglog -Message "Exporting Diagnostic data to C:\scripts\holidaycalendar\logs\diagnostics\$($matchjob.Name)_$(Get-MDY).xml"
        #Export failure results to file for diagnostics
        $jobXML = Get-MailboxImportRequestStatistics -Identity $matchjob.identity
        $jobXML | Export-Clixml  "C:\scripts\holidaycalendar\logs\diagnostics\$($matchjob.Name)_$(Get-MDY).xml" -Force -Confirm:$false
        Remove-MailboxImportRequest -Identity $($matchjob.Identity) -Confirm:$false

    } else{
        Write-Debuglog -Message "User: $($user.SamAccountName) Job: Not Found"
    }
}

Using Certutil GUI to validate CRLs on all CDPs and using OCSP

I recently published an updated CRL for my offline root CA to AD as well as to the CDPs and wanted to verify that everything is working correctly. Of course you can use the command line version

certutil -verify filename.cer will validate it from the command line. Running certutil -URL https://foo will bring up a UI instead. There you can clear out the URL and select a certificate issued by the CA whose CRLs you are trying to check, or alternatively give a URL that serves a certificate from the chain you are trying to validate.
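For reference, the command-line equivalents look like this (the filename and URL are placeholders); -urlfetch makes certutil retrieve and check the CDP/AIA/OCSP URLs embedded in the certificate rather than relying on the local cache, which is handy when verifying a freshly published CRL.

certutil -verify issuedcert.cer
certutil -verify -urlfetch issuedcert.cer
certutil -URL https://www.example.com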

(Screenshot: the certutil URL Retrieval tool)

Reference: https://blogs.technet.microsoft.com/pki/2006/11/30/basic-crl-checking-with-certutil/

 

Parsing Certificates on sites using Python

I had a need to crawl services running on various ports on a corporate network to see if they were running SSL, and if so, whether the certificate was expired.

I had trouble doing this in PowerShell, which is my preferred scripting language; something changed that no longer lets me ignore certificate errors using what has worked for me in the past. So instead of digging around in .NET libraries, I figured it was a good excuse to brush up on some Python skills instead. Basically I just need to identify the certificates being used by server processes (and related information like expiration, validity, etc.).

The script will scan through host and port combos for devices that respond to ping, check if the port speaks SSL, and dump out cert details in JSON format. It is currently hardcoded to dump the results into files in c:\data (one file per port, covering all hosts/IPs in the list). The input file is formatted as index,host (I used the Alexa Top 1M sites as input for testing), as shown below. If you have a simple server list, you don't need to use the convertLines function. Sample input:

1,google.com
2,youtube.com
3,facebook.com
4,baidu.com
5,wikipedia.org
6,qq.com
7,amazon.com
8,yahoo.com
9,taobao.com
10,tmall.com
11,twitter.com

I originally intended to add multi-threading to the starter template; here is the multithreaded version, which significantly improves performance over linear execution.

import ssl, socket, json, logging, sys, time
from urllib3.contrib import pyopenssl as reqs
from multiping import MultiPing
from concurrent import futures
import itertools
from itertools import islice
#input file
filename = "c:\\data\\top-2k.txt"
#per-port output file defined in loop further since I had trouble once the list grew too large
#control threading based on workload performance
maxthreads=100

"""Starting Function Definition"""
def convertLines(lines):
    #If CSV has a header we want to remove
    #head = lines[0]
    #del lines[0]
    infoDict = {}
    for line in lines: #Going through everything but the first line
        #infoDict[line.split(",")[0]] = [tuple(line.split(",")[1:])]
        arr = line.rstrip('\n').strip('\'').split(',')
        infoDict[arr[0]] = arr[1]
    return infoDict

def read_file(filename):
    thefile = open(filename, "r")
    lines = []
    for i in thefile:
        lines.append(i)
    thefile.close()
    return lines

def ping(host):
    mp = MultiPing([host])
    mp.send()
    try:
        responses, no_responses = mp.receive(1)
    except:
        return False
    if responses:
        return True
    else:
        return False

def test_connection(host,port,timeoutSecs=1):
    """Test tcp connectivity to port"""
    try:
        conn = socket.create_connection((host,port),timeoutSecs)
        conn.close()
        return True
    except socket.error as msg:
        return False

def split_range_list(num_range):
    result = []
    for item in num_range:
        if item.isdigit(): result.append(int(item))
        else:
            item=item.split("-")
            ml=list(range(int(item[0]),int(item[1])+1))
            result += (ml)
            #result.append(list(range(item.replace("-",","))))
    return result

def get_cert(server,port,timeout=1):
    current = (dict([('server', server), ('port', port)]))
    res = test_connection(server,port,timeout)
    if(res):
        print("Succesfully connected to {0} on port {1}".format(server,port))
        current['connected'] = True
        current['port'] = port
        try:
            pemcert = reqs.ssl.get_server_certificate((server, port))
            cert = reqs.OpenSSL.crypto.load_certificate(
                reqs.OpenSSL.crypto.FILETYPE_PEM,
                pemcert
            )
            current['isTLS'] = True
            current['subjectCN'] = cert.get_subject().CN
            current['subject'] = ("CN={0}, OU={1}, O={2}, L={3}, ST={4}, C={5}".format(cert.get_subject().CN, cert.get_subject().OU, cert.get_subject().O, cert.get_subject().L, cert.get_subject().ST, cert.get_subject().C))
            current['issuer'] = ("CN={0}, OU={1}, O={2}, L={3}, ST={4}, C={5}".format(cert.get_issuer().CN, cert.get_issuer().OU, cert.get_issuer().O, cert.get_issuer().L, cert.get_issuer().ST, cert.get_issuer().C))
            current['issuerCN'] = cert.get_issuer().CN
            current['thumbprint'] = cert.digest("SHA1").decode("utf-8")
            current['validTo'] = cert.get_notAfter().decode("utf-8")
            current['validFrom'] = cert.get_notBefore().decode("utf-8")
            current['subjectAltNames'] = reqs.get_subj_alt_name(cert)
        except (ssl.SSLError, TimeoutError) as msg:
            #print(msg)
            current['isTLS'] = False
        return current
    else:
        #print("Couldn't connect to {0} on port {1}".format(server,port))
        #current['connected'] = False
        pass

"""start main program"""
#ports to scan - this will result in a lot of files getting generated (one per port); maybe refine the file logic later
portList = split_range_list(["443","636","3389","7000-9000"])
#get_cert("annuaire-telechargement.com",8443,2)   # one-off test call; leave commented out for full runs
start = time.time()
fullList = convertLines(read_file(filename))
#Just use the first 40 for testing
#mydict = dict(islice(fullList.items(),0,40))
mydict = fullList
resultArr = []
certArr = []
#prod = dict(map(list,itertools.product(list(mydict.values()), portList)))
#print(prod)
#time.sleep(30)
with futures.ThreadPoolExecutor(max_workers=maxthreads) as executor:
    # Start the load operations and mark each future with its host
    future_to_server = {executor.submit(ping, server): server for server in mydict.values()}
    for future in futures.as_completed(future_to_server):
        server = future_to_server[future]
        try:
            data = future.result()
        except Exception as exc:
            data = False
            print('generated an exception in ping loop: {}'.format(exc))
        else:
            print('{} pingable is {}'.format(server,data))
        if(data):
            resultArr.append(server)

print("It took {} seconds".format(time.time() - start))
#Lets generate a host port combo array
#prod = dict(map(list,itertools.product(list(mydict.values()), portList)))

for port in portList:
    outfile = "c:\\data\\certs{}.txt".format(port)
    certArr = []
    start = time.time()
    with futures.ThreadPoolExecutor(max_workers=maxthreads) as executor:
        # Start the load operations and mark each future with its host
        future_to_cert = {executor.submit(get_cert, server,port): server for server in resultArr}
        for future in futures.as_completed(future_to_cert):
            server = future_to_cert[future]
            try:
                data = future.result()
            except Exception as exc:
                data = None
                print("exception for {} in cert scan on port {}: {}".format(server, port, exc))
            if(data):
                certArr.append(data)
                print("success for {} got back{}".format(server,data))
    print("It took {} seconds to get certs for port {}".format(time.time() - start, port))
    if(certArr):
        with open(outfile, 'w') as f:
          json.dump(certArr, f, ensure_ascii=True, indent=4)        


Here is the first version I originally ported; it didn't scale as well since it ran linearly.

import ssl, socket, json, sys, time
from urllib3.contrib import pyopenssl as reqs
from multiping import MultiPing

#socket and pyopenssl from urllib3 are core libs here for functionality
#multiping is to not waste time checking multiple ports on a server that is down
#json is to export results to a file in JSON format
#First ping a server and check if it is up
#IF ping responds, check the port on the server if it responds to TCP socket connection
#If TCP connects, try connecting with SSL and get certificate details if succesful
#store results in an array for entries where we at least got a TCP connection to the port
#save it in a JSON doc for processing later
resultArr = []
hostList = ['www.google.com',"www.microsoft.com", "192.168.2.46"]
portList = [8443,443,80,8080]

def test_connection(host,port,timeoutSecs=1):
    try:
        conn = socket.create_connection((host,port),timeoutSecs)
        conn.close()
        return True
    except socket.error as msg:
        return False

def https_cert(host, port):
    x509 = reqs.OpenSSL.crypto.load_certificate(
        reqs.OpenSSL.crypto.FILETYPE_PEM,
        reqs.ssl.get_server_certificate((host, port))
    )
    return x509

def ping(host):
    mp = MultiPing([host])
    mp.send()
    responses, no_responses = mp.receive(1)
    if responses:
        return True
    else:
        return False

#I'd like server, port, SSL(TRUE/FALSE),certFields, TCP_CONNECTED
loopstart = time.time()
print("Starting loop at {0}".format(loopstart))
for server in hostList:
    serverstart = time.time()
    pinging = ping(server)
    if(pinging):
        for port in portList:
            current = (dict([('server', server), ('port', port)]))
            res = test_connection(server,port,1)
            if(res):
                print("Succesfully connected to {0} on port {1}".format(server,port))
                current['connected'] = True
                try:
                    pemcert = reqs.ssl.get_server_certificate((server, port))
                    cert = reqs.OpenSSL.crypto.load_certificate(
                        reqs.OpenSSL.crypto.FILETYPE_PEM,
                        pemcert
                    )
                    current['isTLS'] = True
                    current['subjectCN'] = cert.get_subject().CN
                    current['subject'] = ("CN={0}, OU={1}, O={2}, L={3}, ST={4}, C={5}".format(cert.get_subject().CN, cert.get_subject().OU, cert.get_subject().O, cert.get_subject().L, cert.get_subject().ST, cert.get_subject().C))
                    current['issuer'] = ("CN={0}, OU={1}, O={2}, L={3}, ST={4}, C={5}".format(cert.get_issuer().CN, cert.get_issuer().OU, cert.get_issuer().O, cert.get_issuer().L, cert.get_issuer().ST, cert.get_issuer().C))
                    current['issuerCN'] = cert.get_issuer().CN
                    current['thumbprint'] = cert.digest("SHA1").decode("utf-8")
                    current['validTo'] = cert.get_notAfter().decode("utf-8")
                    current['validFrom'] = cert.get_notBefore().decode("utf-8")
                    #current['subjectAltNames'] = reqs.get_subj_alt_name(cert)
                except ssl.SSLError as msg:
                    #print(msg)
                    current['isTLS'] = False
                resultArr.append(current)
            else:
                print("Couldn't connect to {0} on port {1}".format(server,port))
                current['connected'] = False
        serverend = time.time()
        serverLoopTime = serverend - serverstart
        print("Ending loop for {0} Total time for all ports {1:.3f} seconds".format(server,serverLoopTime))
endloop = time.time()
fullLoopTime = endloop - loopstart
print("Ending loop at {0} Total time for all entries {1:.3f} seconds".format(endloop, fullLoopTime))
with open('C:\data\data.txt', 'w') as f:
  json.dump(resultArr, f, ensure_ascii=False, indent=4)
#uncomment line below to view results as JSON on console
#print(json.dumps(resultArr, indent = 4))

Querying, managing and updating Windows Licensing with Powershell

A lot of people build their images with MAK keys, and as they grow, they want to switch over to KMS keys aka Volume License Keys. There are other scenarios I have come across where you have the right keys, but for some reason or another the license has not activated, and the only way you find out is that your machine starts shutting down every hour.

First thing to check of course is that the KMS SRV record exists in DNS and is pointing to the correct KMS server

nslookup -type=srv _vlmcs._tcp

I happened to run into a scenario where a MAK key was being used in the image, but it had gone past its maximum activation limit. So I decided to write a little script to find out which machines were impacted (i.e. weren't activated) and what key they were using (it only gives you a partial key, but that is enough to look it up).
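For a one-off check on a single machine, slmgr shows the same information the script below gathers remotely via CIM; /dlv includes the key channel, partial product key, and KMS host, and /xpr shows just the activation status.

# Show detailed license info (key channel, partial key, KMS host) on the local machine
cscript //Nologo C:\Windows\System32\slmgr.vbs /dlv
# Check just the activation status / expiration
cscript //Nologo C:\Windows\System32\slmgr.vbs /xpr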


$dt = (Get-Date).AddDays(-45)
#Get Servers from a server OU that have been recently active and updated their password in AD in lst 45 days
$allservers = Get-ADComputer -SearchBase "OU=Servers,DC=ad,DC=example,DC=com" -Filter * -Properties PasswordLastSet, DNSHostName, IPV4Address | where {$_.PasswordLastSet -gt $dt}
#ping them to see if they are at least reachable. Not a whole lot we can do here to deal with WinRM issues
$upservers = $allservers | where {(Test-Connection -ComputerName $_.DNSHostName -Count 1 -Quiet) -eq $true }
# Try to connect to each of the servers and get the information. This will take some time, get a coffee
$licinfo= $upservers | %{ Get-CimInstance -ComputerName $_.Name -ClassName SoftwareLicensingProduct -Filter "ApplicationID = '55c92734-d682-4d71-983e-d6ec3f16059f'" -OperationTimeoutSec 5 |
where {$_.PartialProductKey -ne $null} }
#Gather info and put it in a CSV to share out findings with team members if needed
$licinfo | select -Property PSComputername, ProductKeyChannel, LicenseStatus, PartialProductKey, Name, KeyManagementServiceMachine, GenuineStatus, VLActivationType |
where {$_.PSCOMputerName} | sort -Property LicenseStatus -Descending | Export-CSV -path c:\work\licenses.csv -NoTypeInformation

#now let's see if we can go fix some of these. find the ones with licenses that are not activated, group them by OS (since keys are OS-flavor specific), and then activate
$missing = $licinfo | where {$_.LicenseStatus -gt 1} |
select -Property PSComputername, ProductKeyChannel, LicenseStatus, PartialProductKey, Name, KeyManagementServiceMachine, GenuineStatus, VLActivationType
# in case you want to visually see the list of servers.. uncomment the line below
# $missing | ft
$key2012 = "11111-11111-11111-11111-11111"
$key2016 = "22222-22222-22222-22222-22222"
$kmsServer = "kms.ad.example.com:1688"
$s2012 = $missing | where {(Get-ADComputer -Identity $_.PSComputerName -Properties OperatingSystem, OperatingSystemVersion).OperatingSystem -match "Windows Server 2012 R2" }
$s2016 = $missing | where {(Get-ADComputer -Identity $_.PSComputerName -Properties OperatingSystem, OperatingSystemVersion).OperatingSystem -match "Windows Server 2016" }
#Fix the 2012s. On each computer update license key, manually set KMS server if necessary and run activation script
$s2012| % { $_.PScomputername; Invoke-Command -ComputerName $_.PScomputername -ScriptBlock `
{ & cscript //Nologo C:\Windows\System32\slmgr.vbs -ipk $using:key2012; & cscript //Nologo C:\Windows\System32\slmgr.vbs -skms $using:kmsServer; & cscript //Nologo C:\Windows\System32\slmgr.vbs -ato }}

#Fix the 2016s. On each computer update license key, manually set KMS server if necessary and run activation script
$s2016| % { $_.PScomputername; Invoke-Command -ComputerName $_.PScomputername -ScriptBlock `
{ & cscript //Nologo C:\Windows\System32\slmgr.vbs -ipk $using:key2016; & cscript //Nologo C:\Windows\System32\slmgr.vbs -skms $using:kmsServer; & cscript //Nologo C:\Windows\System32\slmgr.vbs -ato }}

#After running the script to fix, make sure that the status shows as good
$licinfoNEW = $s2012 | %{ Get-CimInstance -ComputerName $_.PSComputerName -ClassName SoftwareLicensingProduct -Filter "ApplicationID = '55c92734-d682-4d71-983e-d6ec3f16059f'" -OperationTimeoutSec 5 |
where {$_.PartialProductKey -ne $null} }

$licinfoNEW += $s2016 | %{ Get-CimInstance -ComputerName $_.PSComputerName -ClassName SoftwareLicensingProduct -Filter "ApplicationID = '55c92734-d682-4d71-983e-d6ec3f16059f'" -OperationTimeoutSec 5 |
where {$_.PartialProductKey -ne $null} }

#Display the information on the screen .. or export it to CSV if you choose
$licinfoNEW |
select -Property PSComputername, ProductKeyChannel, LicenseStatus, PartialProductKey, Name, KeyManagementServiceMachine, GenuineStatus, VLActivationType | ft

This is of course a script I had to whip up on the spot for a specific issue. I would love to hear any related use cases that other people might have.