Tuesday, February 11, 2020

Find name of AD group after it has been deleted but you have the SID

So you have the SID of a deleted group, but you want to know its name and other details. You can get this information provided the object is still present in the Active Directory Recycle Bin (assuming you have that enabled in your domain).

That all being said, here is the PowerShell you need:

Get-ADObject -Filter 'isDeleted -eq $true -and name -ne "Deleted Objects" -and objectSid -like "Enter SID here"' -IncludeDeletedObjects -Properties samAccountName,displayName,objectSid

Example

Get-ADObject -Filter 'isDeleted -eq $true -and name -ne "Deleted Objects" -and objectSid -like "S-1-5-21-1601936709-1892662786-3840804712-315762"' -IncludeDeletedObjects -Properties samAccountName,displayName,objectSid

Cheers!

Wednesday, February 5, 2020

Azure Domain Controller Glue Record Gets Deleted

This is somewhat of a corner case, but if it happened to us, it could happen to others. I also regard this as a bug. Losing a Domain Controller's glue record can have a profoundly negative impact on the functionality of an Active Directory domain. Here is a scenario where this will happen.
  1. The Domain Controller is an Azure IaaS VM.
  2. The DNS zone for the domain has dynamic updates set to 'Nonsecure & Secure'.
The chain of events:
  1. Because the Domain Controller is in Azure, it cannot have a genuinely static IP address within the OS. You have to set the Azure NIC settings to a 'Static IP,' which is actually, under the covers, a DHCP reservation within the Azure DHCP system. In any case, the OS believes it has a dynamic address because that is what the NIC tells it. That is why you had to click past the warnings about dynamic IPs when the server was promoted.
  2. Since the OS believes its IP is dynamic, the glue record it creates is also dynamic because it thinks it may have to change the value if the NIC gets a new address.
  3. DNS scavenging is enabled, so stale dynamic records in the zone are periodically deleted.
  4. Because the zone is set to 'Nonsecure & Secure,' the DHCP server is responsible for the renewal of the server's DNS record when its DHCP lease is 50% expired.
  5. Azure DHCP leases are hardcoded at 136 years, while the default scavenging period is 7 days, so the record is never renewed and scavenging eventually deletes it.
Of course, this issue is not just a problem for Domain Controllers; it will affect all member servers that are IaaS VMs in a DNS zone that allows nonsecure dynamic updates.

Get those DNS zones set to Secure!
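
For AD-integrated zones, that switch can also be made with the DnsServer PowerShell module (Windows Server 2012 or later); a quick sketch, with a placeholder zone name:

```powershell
# Restrict the zone to secure dynamic updates only
Set-DnsServerPrimaryZone -Name "contoso.com" -DynamicUpdate Secure
```

Note that the 'Secure' setting is only available on AD-integrated zones.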

Cheers

Saturday, February 1, 2020

Mac Time Machine Logs

If you need to view your MacOS Time Machine Logs:

Open Terminal and use:

log show --style syslog --predicate 'senderImagePath contains[cd] "TimeMachine"' --info

or, to see them stream live (like tail):

log stream --style syslog --predicate 'senderImagePath contains[cd] "TimeMachine"' --info

If you want to see only errors, you can append the following to either command:

| grep 'error'

Cheers

Sunday, December 29, 2019

macOS Catalina "Can't Be Opened Because Apple Cannot Check It for Malicious Software"

This message is appearing more and more when trying to open an application on macOS Catalina. My understanding is that the message relates to the notarization of software. I'm going to keep this brief because I am not talking about the following common issues, which cloud research into this error.

  • You are attempting to run a 32 bit application.
  • You are attempting to run a very old application.
  • You are attempting to run an application that has not followed Apple's publishing guidelines.
In the above scenarios, you can attempt to get an update from the vendor, or you can try going to:

System Preferences | Security & Privacy | General

Look for an 'Open Anyway' option.

I'm talking about recent applications where all of the above fails. This is what you can do:

I take no responsibility if you execute software that damages your system.

Go to a command terminal prompt. Type:

sudo spctl --master-disable

Run your application

Type:

sudo spctl --master-enable
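
You can verify Gatekeeper's state before and after with:

```sh
spctl --status
```

which should report 'assessments enabled' once you have re-enabled it.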

Cheers!

Monday, December 23, 2019

Custom Azure RBAC Roles (Step by Step)

Azure provides you the ability to create custom RBAC roles, and the process bears no resemblance to any comparable process in Active Directory. As is typical for me, I am going to explain by example.

The IT department that I work for has a dedicated process for decommissioning servers, and on premises this is a very mature process, which looks something like this:
  • Switch off the VM for a week and see if anyone screams
  • Delete the VM and associated disk
  • Delete DNS records
  • Revoke certificates
  • etc.
You get the picture. So, in order to harmonize that process for Azure VMs, we attempted to replicate it and quickly discovered that personnel who have been assigned the 'Virtual Machine Contributor' role do not have the ability to delete the associated disks. This is reasonable because, unlike VMware, the disks of an Azure VM are totally separate objects and totally separate object types. What is not reasonable, and in my opinion stupid, is that there is no built-in role within Azure to allow that. So here we go:
  • For education, you can start by grabbing a role that you want to expand upon. In this case it makes sense to start with 'Virtual Machine Contributor'. You could start from scratch, but more on that later. So let's run some PowerShell (as always, excuse my line-wrap):
Get-AzRoleDefinition -Name "Virtual Machine Contributor" | ConvertTo-Json | Out-File "C:\Temp\Virtual Machine Contributor.json"

This will create a file that looks like the following. For clarity I am highlighting the items we will be modifying.

{
    "Name": "Virtual Machine Contributor",
    "Id": "9980e02c-c2be-4d73-94e8-173b1dc7cf3c",
    "IsCustom": false,
    "Description": "Lets you manage virtual machines, but not access to them, and not the virtual network or storage account they're connected to.",
    "Actions": [
        "Microsoft.Authorization/*/read",
        "Microsoft.Compute/availabilitySets/*",
        "Microsoft.Compute/locations/*",
        "Microsoft.Compute/virtualMachines/*",
        "Microsoft.Compute/virtualMachineScaleSets/*",
        "Microsoft.DevTestLab/schedules/*",
        "Microsoft.Insights/alertRules/*",
        "Microsoft.Network/applicationGateways/backendAddressPools/join/action",
        "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
        "Microsoft.Network/loadBalancers/inboundNatPools/join/action",
        "Microsoft.Network/loadBalancers/inboundNatRules/join/action",
        "Microsoft.Network/loadBalancers/probes/join/action",
        "Microsoft.Network/loadBalancers/read",
        "Microsoft.Network/locations/*",
        "Microsoft.Network/networkInterfaces/*",
        "Microsoft.Network/networkSecurityGroups/join/action",
        "Microsoft.Network/networkSecurityGroups/read",
        "Microsoft.Network/publicIPAddresses/join/action",
        "Microsoft.Network/publicIPAddresses/read",
        "Microsoft.Network/virtualNetworks/read",
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.RecoveryServices/locations/*",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/*/read",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write",
        "Microsoft.RecoveryServices/Vaults/backupPolicies/read",
        "Microsoft.RecoveryServices/Vaults/backupPolicies/write",
        "Microsoft.RecoveryServices/Vaults/read",
        "Microsoft.RecoveryServices/Vaults/usages/read",
        "Microsoft.RecoveryServices/Vaults/write",
        "Microsoft.ResourceHealth/availabilityStatuses/read",
        "Microsoft.Resources/deployments/*",
        "Microsoft.Resources/subscriptions/resourceGroups/read",
        "Microsoft.Storage/storageAccounts/listKeys/action",
        "Microsoft.Storage/storageAccounts/read",
        "Microsoft.Support/*"
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/"
    ]
}
  • So now we need to start the modifications. The first step is to provide the name. I strongly suggest a versioned naming format; in my case:
"Name": "SLHS-Virtual Machine Contributor v2.0",
  • Don't forget the JSON comma!
  • Next, DELETE the whole Id field. When we create the custom role, Azure will assign a fresh ID for us. If you forget this step, the process will try to overwrite the existing role, which (a) would be bad and (b) would fail anyway.
  • Next we change the 'IsCustom' field.
"IsCustom": true,
  • Next we change the description. In my case I chose:
"Description": "Lets you manage virtual machines, including the deletion of disks.",
  • OK now the fun part, we need to add a line to provide the access we want. For this we need to turn to the master recipe list which is provided here:
https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations
  • This is the list of 'Resource Provider Operations', a compendium of all available rights. You kind of need to know what to search for, but for this purpose we need to be looking at 'Microsoft.Compute/disks'. If you search the page for that, you will see entries like 'Microsoft.Compute/disks/read', 'Microsoft.Compute/disks/write' and, of course, 'Microsoft.Compute/disks/delete'. At this point we can talk a little about structure. You can wildcard each element after the slash, so for example 'Microsoft.Compute/disks/delete' will allow VM disk deletion, but 'Microsoft.Compute/disks/*' will allow all disk actions, including delete.
  • So let's run with that and insert that line into our JSON code.
  • Now here's the stupid part: the 'AssignableScopes' line. I would argue that if you create a custom role, you would want the ability to assign that role to anyone on any object in any subscription. But for custom roles, at the time of writing (December 2019), you cannot wildcard the subscription or assign the role at the tenant root. You must specify a specific subscription. I will show a workaround for this later, but for now I am going to specify a specific subscription. The resultant edited JSON file ends up looking like this (and remember we removed the Id line):
{

    "Name": "SLHS-Virtual Machine Contributor v2.0",
    "IsCustom": true,
    "Description": "Lets you manage virtual machines, including the deletion of disks.",
    "Actions": [
        "Microsoft.Authorization/*/read",
        "Microsoft.Compute/availabilitySets/*",
        "Microsoft.Compute/locations/*",
        "Microsoft.Compute/virtualMachines/*",
        "Microsoft.Compute/virtualMachineScaleSets/*",
        "Microsoft.Compute/disks/delete",
        "Microsoft.DevTestLab/schedules/*",
        "Microsoft.Insights/alertRules/*",
        "Microsoft.Network/applicationGateways/backendAddressPools/join/action",
        "Microsoft.Network/loadBalancers/backendAddressPools/join/action",
        "Microsoft.Network/loadBalancers/inboundNatPools/join/action",
        "Microsoft.Network/loadBalancers/inboundNatRules/join/action",
        "Microsoft.Network/loadBalancers/probes/join/action",
        "Microsoft.Network/loadBalancers/read",
        "Microsoft.Network/locations/*",
        "Microsoft.Network/networkInterfaces/*",
        "Microsoft.Network/networkSecurityGroups/join/action",
        "Microsoft.Network/networkSecurityGroups/read",
        "Microsoft.Network/publicIPAddresses/join/action",
        "Microsoft.Network/publicIPAddresses/read",
        "Microsoft.Network/virtualNetworks/read",
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.RecoveryServices/locations/*",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/*/read",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write",
        "Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write",
        "Microsoft.RecoveryServices/Vaults/backupPolicies/read",
        "Microsoft.RecoveryServices/Vaults/backupPolicies/write",
        "Microsoft.RecoveryServices/Vaults/read",
        "Microsoft.RecoveryServices/Vaults/usages/read",
        "Microsoft.RecoveryServices/Vaults/write",
        "Microsoft.ResourceHealth/availabilityStatuses/read",
        "Microsoft.Resources/deployments/*",
        "Microsoft.Resources/subscriptions/resourceGroups/read",
        "Microsoft.Storage/storageAccounts/listKeys/action",
        "Microsoft.Storage/storageAccounts/read",
        "Microsoft.Support/*"
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/4a5ce960-87d4-431b-ac1c-67a70cb1516e"
    ]
}

  • So save your work as something like "C:\Temp\SLHS-Virtual Machine Contributor v2.0.json".
  • Next we create the new role using PowerShell:
New-AzRoleDefinition -InputFile "C:\Temp\SLHS-Virtual Machine Contributor v2.0.json"
  • If you are successful then you will be presented with some output that describes your newly created role:

Name             : SLHS-Virtual Machine Contributor v2.0
Id               : 4c428c4d-34f9-4e15-9776-2c04ef26f4a3
IsCustom         : True
Description      : Lets you manage virtual machines, including the deletion of disks.
Actions          : {Microsoft.Authorization/*/read, Microsoft.Compute/availabilitySets/*,
                   Microsoft.Compute/locations/*, Microsoft.Compute/virtualMachines/*...}
NotActions       : {}
DataActions      : {}
NotDataActions   : {}
AssignableScopes : {/subscriptions/4a5ce960-87d4-431b-ac1c-67a70cb1516e}
  • That's it for the basic process. You can now assign that role (in this screenshot it's v4.0, not v2.0, but you get the idea).
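
If you prefer PowerShell over the portal for the assignment, a sketch along these lines should work (the AD group name here is hypothetical):

```powershell
# Look up the group and assign the custom role at subscription scope
$Group = Get-AzADGroup -SearchString "MyVMAdmins"
New-AzRoleAssignment -ObjectId $Group.Id `
  -RoleDefinitionName "SLHS-Virtual Machine Contributor v2.0" `
  -Scope "/subscriptions/4a5ce960-87d4-431b-ac1c-67a70cb1516e"
```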


Now we have to deal with the single-subscription malarkey. For this example we are going to start from scratch and create a role specifically for the task at hand (deleting VM disks). Essentially what we need is a script that takes a base name for our role (in the following example, "SLHS-VMDiskDestroyer-v1.0"), a description ("Allows holder to delete VM disks"), the RBAC operation from the Microsoft dictionary ("Microsoft.Compute/disks/delete"), and the name of a pre-created (wait for the sync!) AD group ("CustomRBAC-VMDiskDestroyers-U_GG_IA").

When the script runs, it will cycle through every subscription, add the role using the base name plus the subscription GUID (each role in the tenant must have a unique name), and assign the role to the specified AD group. As with any script that does something to every subscription: take care!


# Will add custom role to all subscriptions
# Complete the four top variables if you have cloned this script.
#######################################################
# Constants for easy cloning of script
$BaseRBACName = "SLHS-VMDiskDestroyer-v1.0"
$Desc         = "Allows holder to delete VM disks"
$Role         = "Microsoft.Compute/disks/delete"
$ADGroup      = "CustomRBAC-VMDiskDestroyers-U_GG_IA"
#######################################################

$JSONName = $BaseRBACName + ".json"
If($JSONName -Like "* *")
{
  Write-Host "Error: JSONName must contain no spaces"
  Exit
}
Get-AZSubscription | ForEach-Object `
{
  $SubID = $_.ID
  $SubName = $_.Name
  $RBACName = $BaseRBACName + "-" + $SubID
  If(Test-Path "c:\temp\$JSONName")
  {
    Remove-Item "c:\temp\$JSONName" -Force
  }
  # Make JSON file
  Add-Content "c:\temp\$JSONName" "{"
  Add-Content "c:\temp\$JSONName" " `"Name`": `"$RBACName`","
  Add-Content "c:\temp\$JSONName" " `"IsCustom`": true,"
  Add-Content "c:\temp\$JSONName" " `"Description`": `"$Desc`","
  Add-Content "c:\temp\$JSONName" " `"Actions`": ["
  Add-Content "c:\temp\$JSONName" " `"$Role`""
  Add-Content "c:\temp\$JSONName" "  ],"
  Add-Content "c:\temp\$JSONName" " `"NotActions`": [],"
  Add-Content "c:\temp\$JSONName" " `"AssignableScopes`": ["
  Add-Content "c:\temp\$JSONName" " `"/subscriptions/$SubID`""
  Add-Content "c:\temp\$JSONName" "   ]"
  Add-Content "c:\temp\$JSONName" "}"
  Write-Host "Adding role definition to $SubName"
  Try
  {
    $RoleObj = New-AZRoleDefinition -InputFile "c:\temp\$JSONName" -ErrorAction Stop
  }
  Catch
  {
    Write-Host "Did not add role definition, probably already exists" -ForegroundColor Yellow
  }
  # Add AD group
  $GroupID = (Get-AzADGroup -SearchString $ADGroup).ID
  $SubScope = "/subscriptions/$SubID"
  Write-Host "Adding $ADGroup to $RBACName"
  Try
  {
    New-AZRoleAssignment -ObjectID $GroupID -RoleDefinitionName $RBACName -Scope $SubScope -ErrorAction Stop
  }
  Catch
  {
    Write-Host "Did not assign role to group, group already has that role" -ForegroundColor Yellow
  }
  Write-Host "`n`n"
}
Cheers!

Tuesday, October 1, 2019

Using an old iPod 3rd Gen on a modern Mac

I have seen a lot of tears flowing in different forums from people who cannot get their original 3rd-generation iPod working on a modern Mac. I recently decided to resurrect mine for nostalgia. Here is the summary.

Battery is screwed
I highly recommend the following kit from iFixit; they have good instructions. However, are you sure your battery is dead? See the section on 'cannot charge'.

https://www.ifixit.com/Store/iPod/iPod-3G-Replacement-Battery/IF192-015?o=3


Cannot charge
The first thing to realize about this iPod is that you cannot charge it by USB. Don't listen to anyone who tells you otherwise. You MUST charge by FireWire. So, you have two choices: you can search eBay for a FireWire charger, or, since you will want to sync your iPod with your Mac anyway, you can follow the last section here on syncing. Successfully attaching the iPod to your Mac for syncing will also allow your Mac to charge your iPod.


Note the Firewire port, not a USB port.

Cannot sync
This is where things are going to cost you a little money. Modern Macs have USB-C ports, and on the Mac a USB-C port is also a Thunderbolt 3 port. Here is what you need:

1 - Purchase a USB-C/ThunderBolt 3 to ThunderBolt 2 adapter.
Apple Part Number A1790. At time of writing it costs $49 from Apple, less on eBay.
Thunderbolt 3 to Thunderbolt 2

2 - Purchase a ThunderBolt 2 to Firewire 800 Adapter
Apple Part Number A1463. At time of writing it costs $29 from Apple, less on eBay.
ThunderBolt 2 to Firewire 800

3 - Purchase a Firewire 800 to Firewire 400 adapter
Elago Part Number EL-FW-ADAP. At time of writing it costs $9.99 from Amazon
Firewire 800 to Firewire 400

4 - Purchase a Firewire 400 to 30 pin iPod cable.
These are as rare as rocking horse poop. You may already have one. If you have trouble locating one, you might consider buying one of the charging adapters described above, because they should come complete with this cable.


  • Connect the USB-C-to-TB2 adapter to your Mac.
  • Connect the TB2-to-FW800 adapter to the USB-C-to-TB2 adapter.
  • Connect the FW800-to-FW400 adapter to your (hopefully existing) FW400-to-30-pin iPod cable.
  • Connect the FW400-to-30-pin cable to your iPod.
Other matters
If you, like me, purchased the iPod when you were using Windows, your first step will be to restore the iPod to factory defaults with iTunes.

Cheers!

Wednesday, June 12, 2019

LM Hash

LM Hashes are weak and archaic, an LM hash does not use a salt, and therefore any identical passwords will have identical hash values. Additionally, the LM hash doesn't process the password as a whole. Instead, it null-pads it to 14 characters (if needed), then splits that value into 7-character chunks and hashes each before sticking them back together. Thus, if the first 7 characters are identical to the last 7, the first 8 bytes of the LM hash will match the last 8.

Example
The LM hash value for 7 null characters is AAD3B435B51404EE. Therefore a password less than 8 characters long will end with AAD3B435B51404EE, and an empty password will always (since LM hashing doesn't use salt) be exactly AAD3B435B51404EEAAD3B435B51404EE.

Also
There is one more caveat, however. LM hashing does not support passwords of 15 characters or longer at all. When such a password is set, the user may receive a prompt asking them to confirm they want to use a password that will be incompatible with older (LM-hash-dependent) software, and the system will then store a null LM hash for that user. I personally recommend that people use 15+ characters in their passwords for precisely this reason.
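
The pad-split-hash structure described above can be sketched in a few lines. To be clear, this is an illustration, not the real algorithm: genuine LM hashing DES-encrypts the constant "KGS!@#$%" using each 7-character half as a key, and MD5 merely stands in for DES here to keep the sketch dependency-free. The structural weaknesses still show through:

```python
import hashlib

def lm_like_hash(password):
    """Illustrates the STRUCTURE of an LM hash only: uppercase, null-pad to 14,
    split into 7-character halves, hash each half, concatenate. Real LM hashing
    DES-encrypts "KGS!@#$%" with each half as the key; MD5 stands in for DES here."""
    if len(password) >= 15:
        return b""  # LM hashing is skipped entirely for 15+ character passwords
    pw = password.upper().encode("ascii").ljust(14, b"\x00")
    half1, half2 = pw[:7], pw[7:]
    return hashlib.md5(half1).digest()[:8] + hashlib.md5(half2).digest()[:8]

# Passwords sharing their first 7 characters share the first half of the "hash":
assert lm_like_hash("SECRET1abcdef")[:8] == lm_like_hash("SECRET1xyz")[:8]

# Any password under 8 characters has the same constant second half (7 nulls hashed):
assert lm_like_hash("SHORT")[8:] == lm_like_hash("TINY")[8:]
```

Each half is attacked independently, which is why LM-era password crackers made short work of even "long" 14-character passwords.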

It is recommended that LM hashes are disabled, thus (this can also be done via Group Policy):

  • Locate the following key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
  • On the Edit menu, create a DWORD value named NoLMHash and set it to 1.
  • Quit Registry Editor.
  • Restart the computer, and then change your password to make the setting active.

Salt
In cryptography, a salt is random data that is used as an additional input to a one-way function that "hashes" data, such as a password or passphrase. Salts are used to safeguard passwords in storage. Historically a password was stored in plaintext on a system, but over time additional safeguards were developed to protect a user's password against being read from the system. A salt is one of those methods.

A new salt is randomly generated for each password. In a typical setting, the salt and the password (or its version after Key stretching) are concatenated and processed with a cryptographic hash function, and the resulting output (but not the original password) is stored with the salt in a database. Hashing allows for later authentication without keeping and therefore risking the plaintext password in the event that the authentication data store is compromised.

Salts defend against dictionary attacks or against their hashed equivalent, a pre-computed rainbow table attack. Since salts do not have to be memorized by humans they can make the size of the rainbow table required for a successful attack prohibitively large without placing a burden on the users. Since salts are different in each case, they also protect commonly used passwords, or those users who use the same password on several sites, by making all salted hash instances for the same password different from each other.
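
By contrast with LM hashing, a salted scheme is easy to sketch with Python's standard library; the 16-byte salt size and iteration count below are arbitrary example values:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only these two values are stored, never the plaintext."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

salt1, d1 = hash_password("hunter2")
salt2, d2 = hash_password("hunter2")
assert d1 != d2                       # same password, different salts, different hashes
assert verify("hunter2", salt1, d1)   # authentication still works
assert not verify("wrong", salt1, d1)
```

Because the two stored hashes of the same password differ, a precomputed rainbow table is useless against them.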

Cheers

Azure Subscription and Registering Resources

There is an issue with Azure Role-Based Access Control (RBAC). Depending on who you listen to, this is a bug (I agree, it is a bug).

Scenario

  • You create a subscription and a resource group within it.
  • You assign someone as a contributor of the RG but not a contributor to the subscription (for a long list of reasons).
  • The user cannot create a resource in the resource group; the wizard complains about the subscription not having permissions to register the resource provider.

Reason
When one attempts to create a resource, two things must be true: (1) the user must have the correct permissions (they do; contributor on the resource group is more than sufficient), and (2) the subscription impersonates the user, using their access rights to register the resource type (e.g. a virtual machine, disk, whatever) as an allowable type of resource within the subscription (this is what fails).

It’s a bug because:
The Wizard should be registering the resource type at the RG level not the subscription level.

Solution
When you create a new subscription for a team, you need to pre-register the resource types as being allowable.
Log on to your tenant in PowerShell and select the relevant subscription thus:

Select-AzSubscription "MyCoolSubscriptionName"

Register the resource types thus:

Get-AzResourceProvider -ListAvailable | ForEach-Object { Register-AzResourceProvider -ProviderNamespace $_.ProviderNamespace }
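
Registering every available provider is a blunt instrument; if you prefer, you can register only the namespaces your teams actually use and then check the result, for example:

```powershell
Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
Get-AzResourceProvider -ProviderNamespace Microsoft.Compute   # check RegistrationState
```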

Cheers!

Friday, June 7, 2019

WebSSOLifetime versus TokenLifetime

What is the difference between WebSSOLifetime and TokenLifetime?

The trick to understanding this is to think of WebSSOLifetime like a Kerberos TGT.

WebSSOLifetime (Default 480 minutes = 8 hours)
This parameter is server-wide, meaning that if you configure it, it's active for all of the ADFS relying parties. Whenever a user requests a token for a given RP, he will have to authenticate to the ADFS service first. Upon communicating with the ADFS service, he will receive two tokens: a token that proves who he is (let's call that the ADFS token) and a token for the RP (let's say the RP token). All in all, this seems very much like the TGT and TGS tickets of Kerberos.
Now, the WebSSOLifetime timeout determines how long the ADFS token can be used to request new RP tokens without having to re-authenticate. In other words, a user can request new tokens for this RP, or for other RPs, and he will not have to prove who he is until the WebSSOLifetime expires the ADFS token.

TokenLifetime (Default 0, which means 10 hours!)
The TokenLifetime is now easy to explain. This parameter is configurable for each RP. Whenever a user receives an RP token, it will expire at some point. At that time the user will have to go to the ADFS server again and request a new RP token. Depending on whether or not the ADFS token is still valid, he will or will not have to re-authenticate.
One argument to lower the TokenLifetime could be that you want claims to be updated faster. With the default, whenever some of the attribute store info is modified, it might take up to 10 hours before the change reaches the user in their claims.

The TokenLifetime can be read using PowerShell

PS > Get-ADFSRelyingPartyTrust -Name "relying_party"

The WebSSOLifetime can be accessed from the ADFS management interface
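
Both lifetimes can also be set from PowerShell; a sketch, where the RP name and minute values are just examples:

```powershell
Set-AdfsRelyingPartyTrust -TargetName "relying_party" -TokenLifetime 60   # per-RP, minutes
Set-AdfsProperties -SsoLifetime 480                                       # server-wide, minutes
```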

Cheers!

Monday, January 14, 2019

Filter Out Events in Windows Event Logs

Did you know you can filter *out* events in the Windows Event Logs by Event ID? Just open the 'Filter Current Log' dialog like you usually would and put a minus in front of the Event ID you want to hide (for example, entering -4624 in the Event IDs box hides all events with ID 4624).


Cheers

Wednesday, June 27, 2018

Find DN for AD Integrated Forest DNS record

With ADUC it is easy to find the distinguished name of an AD object. DNS records are a little more hidden. Here is an example:


dc=ServerName,DC=MyDomain.org,CN=MicrosoftDNS,DC=ForestDnsZones,DC=STLUKES-INT,DC=ORG

Note the weirdness, the first two sections:

dc=ServerName,dc=FullDomainName combine to make an FQDN, and yet the second section would normally be broken up. Say you have a parent domain and a child domain. Normally a DN would look something like this:

cn=ServerName,dc=ChildDomain,dc=ParentDomain,dc=Org

but for this we have

dc=ServerName,dc=ChildDomain.ParentDomain.Org

Weird!

Also, if you want to look at the record's replication metadata, don't forget to include the name of a domain controller that belongs to the same domain as the machine you are running this command from:

repadmin /showobjmeta  sl1dc1 dc=xxsql01,dc=sl2.stfreds-int.org,cn=MicrosoftDNS,DC=ForestDnsZones,DC=stfreds-int,dc=org


Cheers!

Friday, June 1, 2018

Windows 2008 RTM Network Performance Tuning

Windows 2008 RTM Network Performance Tuning:
These NIC options are collectively known as the "TCP Chimney". Originally, these options were designed to relieve a server's compute CPU(s) from some of the stress of networking by offloading some functionality to the CPU on the NIC itself. Circa 10 years ago this caused issues because the NIC vendors did a bad job of leveraging the Microsoft APIs. In more recent times this improved, but the Microsoft APIs themselves were poorly implemented on Windows 2008 RTM (Vista kernel). I am not sure I would recommend baking the following recommendations into a build image, but certainly for troubleshooting poor performance or dropped packets, these parameters can be useful. Note:


  • You may need to experiment.
  • These specifically worded parameters apply to the VMware VMXNET3 NIC, but equivalents should be found on all NICs.
  • Disabling offload hands the networking work back to the compute CPU, which assumes the compute CPU is powerful enough.
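
As a starting point, the overall offload engine can also be toggled globally from an elevated command prompt on Windows 2008 (the per-NIC settings below are changed in the adapter's advanced properties):

```
netsh int tcp set global chimney=disabled
netsh int tcp show global
```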

IPv4 Checksum Offload
When data comes in through a network, the data is checked against a checksum (or validation code) in the headers in the packets it was delivered in. If the data and checksum don't match, the packet is determined to be bad and has to be retransmitted. To speed things up, some network cards can "offload" the checksumming, i.e., perform the checksumming on the network card itself, rather than leave the job to the CPU. This frees up the CPU to do that much more work on its own, and on a server with extremely high network throughput, that much CPU savings can add up.
Recommendation: Disable

IPv4 TSO Offload
Using TSO and LRO on physical and virtual machine NICs improves the performance of ESX/ESXi hosts by reducing the CPU overhead for TCP/IP network operations, leaving the host more CPU cycles to run applications. If TSO is enabled on the transmission path, the NIC divides larger data chunks into TCP segments. If TSO is disabled, the CPU performs segmentation for TCP/IP.
Note: TSO is referred to as LSO (Large Segment Offload or Large Send Offload) in the latest VMXNET3 driver attributes.
Recommendation: Disable

Large Send Offload V2 (IPv4)
Large Send Offload is a feature on modern Ethernet adapters that allows the TCP/IP network stack to build a large TCP message of up to 64KB in length before sending it to the Ethernet adapter. The hardware on the Ethernet adapter (what I'll call the LSO engine) then segments it into smaller data packets (known as "frames" in Ethernet terminology) that can be sent over the wire: up to 1500 bytes for standard Ethernet frames and up to 9000 bytes for jumbo Ethernet frames. In return, this frees the server CPU from having to segment large TCP messages into smaller packets that fit inside the supported frame size.
Recommendation: Disable

Offload IP Options
Miscellaneous IP options
Recommendation: Disable

Offload TCP Options
Miscellaneous TCP Options
Recommendation: Disable

Receive Side Scaling
RSS enables driver stacks to process send and receive-side data for a given connection on the same CPU. Typically, an overlying driver (for example, TCP) sends part of a data block and waits for an acknowledgment before sending the balance of the data. The acknowledgment then triggers subsequent send requests. The RSS indirection table identifies a particular CPU for the receive data processing. By default, the send processing runs on the same CPU if it is triggered by the receive acknowledgment. A driver can also specify the CPU (for example, if a timer is used).
Recommended: Enable

TCP Checksum Offload (IPv4)
The TCP header contains a 16-bit checksum field which is used to verify the integrity of the header and data. For performance reasons the checksum calculation on the transmit side and verification on the receive side may be offloaded from the operating system to the network adapter. 
Recommendation: Disable

Rx Ring #1
Modern, performance/server-grade network interfaces have the capability of keeping transmit and receive descriptor rings in main memory. They use direct memory access (DMA) to transfer packets to and from main memory independently of the CPU. The usual default ring sizes for regular desktop NICs are 256 or 512 entries; high-performance NICs can go up to 4096 and/or 8192.
Recommendation: 4096

Small Rx Buffers
Where 'Rx Ring #1' defines the size of each buffer, 'Small Rx Buffers' defines how many buffers there are.
Recommendation: 8192

Cheers!

Thursday, April 12, 2018

Need to check secure channel on server

If you need to check the secure channel on a server (or indeed a workstation), you can use this command:

nltest  /sc_query:MyCoolDomainName

Cheers!

Domain Controller has incorrect account flags

DCDIAG may reveal the following warning:

Starting test: MachineAccount
Warning:  Attribute userAccountControl of SL1CDC4 is:
 0x82020 = ( PASSWD_NOTREQD | SERVER_TRUST_ACCOUNT | TRUSTED_FOR_DELEGATION )

Typical setting for a DC is

0x82000 = ( SERVER_TRUST_ACCOUNT | TRUSTED_FOR_DELEGATION )

This may be affecting replication.

This is caused by a bug: when you pre-create a computer account in ADUC and then promote it to a DC, userAccountControl is set to 532512 instead of the default 532480. You need to manually set the value to 532480 in ADSIEDIT.MSC or with the following PowerShell:

Get-ADObject -Filter "objectCategory -eq 'computer'" -SearchBase "OU=Domain Controllers,DC=contoso,DC=loc" -SearchScope Subtree -Properties distinguishedName,userAccountControl |
    Select-Object distinguishedName, name, userAccountControl |
    Where-Object { $_.userAccountControl -ne 532480 } |
    ForEach-Object { Set-ADObject -Identity $_.distinguishedName -Replace @{userAccountControl = 532480} -WhatIf }

This can also involve the Primary Group IDs. If you have RODCs in the domain, here is the full summary of the values you should see:

RW DC
UserAccountControl = 0x82000
PrimaryGroupID = 516

RO DC
UserAccountControl = 0x5001000
PrimaryGroupID = 521
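
To sanity-check values like 0x82020 against their flag names, you can decode the bits yourself. A small Python sketch (bit values are from the documented userAccountControl flag definitions; only a relevant subset is included):

```python
# Subset of the documented userAccountControl bit flags
UAC_FLAGS = {
    0x00000020: "PASSWD_NOTREQD",
    0x00001000: "WORKSTATION_TRUST_ACCOUNT",
    0x00002000: "SERVER_TRUST_ACCOUNT",
    0x00080000: "TRUSTED_FOR_DELEGATION",
    0x01000000: "TRUSTED_TO_AUTH_FOR_DELEGATION",
    0x04000000: "PARTIAL_SECRETS_ACCOUNT",
}

def decode_uac(value):
    """Return the names of the flag bits set in a userAccountControl value."""
    return [name for bit, name in sorted(UAC_FLAGS.items()) if value & bit]
```

Decoding 532480 (0x82000) gives SERVER_TRUST_ACCOUNT and TRUSTED_FOR_DELEGATION, matching the typical DC setting quoted above, while the buggy 0x82020 adds PASSWD_NOTREQD.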

Tuesday, April 10, 2018

DNS Resolution


DNS processes and interactions involve the communications between DNS clients and DNS servers during the resolution of DNS queries and dynamic update, and between DNS servers during name resolution and zone administration. Secondary processes and interactions depend on the support for technologies such as Unicode and WINS.

How DNS queries work
When a DNS client needs to look up a name used in a program, it queries DNS servers to resolve the name. Each query message the client sends contains three pieces of information, specifying a question for the server to answer:

A specified DNS domain name, stated as a fully qualified domain name (FQDN).

A specified query type, which can either specify a resource record (RR) by type or a specialized type of query operation.

A specified class for the DNS domain name. For DNS servers running the Windows operating system, this should always be specified as the Internet (IN) class.

For example, the name specified could be the FQDN for a computer, such as “host-a.example.microsoft.com.”, and the query type specified to look for an address (A) RR by that name. Think of a DNS query as a client asking a server a two-part question, such as “Do you have any A resource records for a computer named ‘hostname.example.microsoft.com.’?” When the client receives an answer from the server, it reads and interprets the answered A RR, learning the IP address for the computer it asked for by name.

DNS queries resolve in a number of different ways. A client can sometimes answer a query locally using cached information obtained from a previous query. The DNS server can use its own cache of resource record information to answer a query. A DNS server can also query or contact other DNS servers on behalf of the requesting client to fully resolve the name, and then send an answer back to the client. This process is known as recursion.

In addition, the client itself can attempt to contact additional DNS servers to resolve a name. When a client does so, it uses separate and additional queries based on referral answers from servers. This process is known as iteration.

In general, the DNS query process occurs in two parts:

A name query begins at a client computer and is passed to a resolver, the DNS Client service, for resolution.

When the query cannot be resolved locally, DNS servers can be queried as needed to resolve the name.


Both of these processes are explained in more detail in the following sections.

DNS Resolution Overview

As shown in the initial steps of the query process, a DNS domain name is used in a program on the local computer. The request is then passed to the DNS Client service for resolution using locally cached information. If the queried name can be resolved, the query is answered and the process is completed.

The local resolver cache can include name information obtained from two possible sources:

If a Hosts file is configured locally, any host name-to-address mappings from that file are loaded into the cache when the DNS Client service is started.

Resource records obtained in answered responses from previous DNS queries are added to the cache and kept for a period of time.

If the query does not match an entry in the cache, the resolution process continues with the client querying a DNS server to resolve the name.
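
The cache-first behaviour described above can be modelled in a few lines. This is a toy Python sketch, not the real DNS Client service (which also tracks TTLs and negative answers); the names and addresses are made up:

```python
# Hosts-file entries are loaded into the cache when the service starts
hosts_file = {"host-a.example.microsoft.com.": "192.168.1.10"}

# Answers from previous DNS queries are added here over time
query_cache = {}

def resolve_locally(fqdn):
    """Return a cached address, or None to fall through to a DNS server."""
    cache = {**hosts_file, **query_cache}
    return cache.get(fqdn)
```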

Overview of DNS Query Process



As indicated in the preceding figure, the client queries a preferred DNS server. The server used during the initial client/server query is selected from a global list.

When the DNS server receives a query, it first checks to see if it can answer the query authoritatively based on resource record information contained in a locally configured zone on the server. If the queried name matches a corresponding RR in local zone information, the server answers authoritatively, using this information to resolve the queried name.

If no zone information exists for the queried name, the server then checks to see if it can resolve the name using locally cached information from previous queries. If a match is found here, the server answers with this information. Again, if the preferred server can answer with a positive matched response from its cache to the requesting client, the query is completed.

If the queried name does not find a matched answer at its preferred server — either from its cache or zone information — the query process can continue, using recursion to fully resolve the name. This involves assistance from other DNS servers to help resolve the name. By default, the DNS Client service asks the server to use a process of recursion to fully resolve names on behalf of the client before returning an answer.

In order for the DNS server to do recursion properly, it first needs some helpful contact information about other DNS servers in the DNS domain namespace. This information is provided in the form of root hints, a list of preliminary RRs that can be used by the DNS service to locate other DNS servers that are authoritative for the root of the DNS domain namespace tree. Root servers are authoritative for the domain root and top-level domains in the DNS domain namespace tree.

By using root hints to find root servers, a DNS server is able to complete the use of recursion. In theory, this process enables any DNS server to locate the servers that are authoritative for any other DNS domain name used at any level in the namespace tree.

For example, consider the use of the recursion process to locate the name “host-b.example.microsoft.com.” when the client queries a single DNS server. The process occurs when a DNS server and client are first started and have no locally cached information available to help resolve a name query. It assumes that the name queried by the client is for a domain name of which the server has no local knowledge, based on its configured zones.

First, the preferred server parses the full name and determines that it needs the location of the server that is authoritative for the top-level domain, “com”. It then uses an iterative query to the “com” DNS server to obtain a referral to the “microsoft.com” server. Next, a referral answer comes from the “microsoft.com” server to the DNS server for “example.microsoft.com”.

Finally, the “example.microsoft.com.” server is contacted. Because this server contains the queried name as part of its configured zones, it responds authoritatively back to the original server that initiated recursion. When the original server receives the response indicating that an authoritative answer was obtained to the requested query, it forwards this answer back to the requesting client and the recursive query process is completed.
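
The referral chain in this example can be sketched as a walk down the namespace tree. The following toy Python model uses a hard-coded referral table standing in for the real servers; all server names and the address are illustrative:

```python
# Toy referral table: zone -> authoritative server, mirroring the example above
AUTHORITATIVE = {
    "com.": "com-server",
    "microsoft.com.": "microsoft-server",
    "example.microsoft.com.": "example-server",
}
ZONE_DATA = {"host-b.example.microsoft.com.": "10.0.0.5"}

def resolve_recursively(fqdn):
    """Walk the name right to left, following referrals zone by zone,
    then look the name up in the deepest zone's data."""
    labels = fqdn.rstrip(".").split(".")
    referral_path = []
    for i in range(len(labels) - 1, -1, -1):        # com. -> microsoft.com. -> ...
        zone = ".".join(labels[i:]) + "."
        if zone in AUTHORITATIVE:
            referral_path.append(AUTHORITATIVE[zone])
    return referral_path, ZONE_DATA.get(fqdn)
```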

Although the recursive query process can be resource-intensive when performed as described above, it has some performance advantages for the DNS server. For example, during the recursion process, the DNS server performing the recursive lookup obtains information about the DNS domain namespace. This information is cached by the server and can be used again to help speed the answering of subsequent queries that use or match it. Over time, this cached information can grow to occupy a significant portion of server memory resources, although it is cleared whenever the DNS service is cycled on and off.


The following three figures illustrate the process by which the DNS client queries the servers on each adapter.


Querying the DNS Server, Part 1



Querying the DNS Server, Part 2



Querying the DNS Server, Part 3



The DNS Client service queries the DNS servers in the following order:

  1. The DNS Client service sends the name query to the first DNS server on the preferred adapter's list of DNS servers and waits one second for a response.

  2. If the DNS Client service does not receive a response from the first DNS server within one second, it sends the name query to the first DNS server on all adapters that are still under consideration and waits two seconds for a response.

  3. If the DNS Client service does not receive a response from any DNS server within two seconds, it sends the query to all DNS servers on all adapters that are still under consideration and waits another two seconds for a response.

  4. If the DNS Client service still does not receive a response from any DNS server, it sends the name query to all DNS servers on all adapters that are still under consideration and waits four seconds for a response.

  5. If the DNS Client service does not receive a response from any DNS server, it sends the query to all DNS servers on all adapters that are still under consideration and waits eight seconds for a response.

  6. If the DNS Client service receives a positive response, it stops querying for the name, adds the response to the cache and returns the response to the client.

  7. If the DNS Client service has not received a response from any server within eight seconds, it responds with a timeout. Also, if it has not received a response from any DNS server on a specified adapter, then for the next 30 seconds the DNS Client service responds to all queries destined for servers on that adapter with a timeout and does not query those servers.

If at any point the DNS Client service receives a negative response from a server, it removes every server on that adapter from consideration during this search. For example, if in step 2, the first server on Alternate Adapter A gave a negative response, the DNS Client service would not send the query to any other server on the list for Alternate Adapter A.

The DNS Client service keeps track of which servers answer name queries more quickly, and it moves servers up or down on the list based on how quickly they reply to name queries.
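
Assuming no round is answered early and the rounds run back to back, the per-round waits above imply the following cumulative deadlines (a small Python sketch of the arithmetic):

```python
# Per-round waits from the steps above: 1 s for the first server on the
# preferred adapter, then 2 s, 2 s, 4 s and 8 s as the query fans out.
WAITS = [1, 2, 2, 4, 8]

def cumulative_deadlines(waits=WAITS):
    """Elapsed time at which each round of queries gives up."""
    deadlines, elapsed = [], 0
    for w in waits:
        elapsed += w
        deadlines.append(elapsed)
    return deadlines
```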

The following figure shows how the DNS client queries each server on each adapter.


Alternate query responses
The preceding description of DNS queries assumes that the process ends with a positive response returned to the client. However, queries can return other answers as well. These are the most common query answers:

  • An authoritative answer
  • A positive answer
  • A referral answer
  • A negative answer

An authoritative answer is a positive answer returned to the client and delivered with the authority bit set in the DNS message to indicate the answer was obtained from a server with direct authority for the queried name.

A positive response can consist of the queried RR or a list of RRs (also known as an RRset) that fits the queried DNS domain name and record type specified in the query message.

A referral answer contains additional RRs not specified by name or type in the query. This type of answer is returned to the client if the recursion process is not supported. The records are meant to act as helpful reference answers that the client can use to continue the query using iteration. A referral answer contains additional data such as RRs that are other than the type queried. For example, if the queried host name was “www” and no A RRs for this name were found in this zone but a CNAME RR for “www” was found instead, the DNS server can include that information when responding to the client. If the client is able to use iteration, it can make additional queries using the referral information in an attempt to fully resolve the name for itself.

A negative response from the server can indicate that one of two possible results was encountered while the server attempted to process and recursively resolve the query fully and authoritatively:

An authoritative server reported that the queried name does not exist in the DNS namespace.

An authoritative server reported that the queried name exists, but no records of the specified type exist for that name.

The resolver passes the results of the query, in the form of either a positive or negative response, back to the requesting program and caches the response.

If the resultant answer to a query is too long to be sent and resolved in a single UDP message packet, the DNS server can initiate a failover response over TCP port 53 to answer the client fully in a TCP connected session.

Disabling the use of recursion on a DNS server is generally done when DNS clients are being limited to resolving names to a specific DNS server, such as one located on your intranet. Recursion might also be disabled when the DNS server is incapable of resolving external DNS names, and clients are expected to fail over to another DNS server for resolution of these names. If you disable recursion on the DNS server, you will not be able to use forwarders on the same server.

By default, DNS servers use several default timings when performing a recursive query and contacting other DNS servers. These defaults include:

A recursion retry interval of 3 seconds. This is the length of time the DNS service waits before retrying a query made during a recursive lookup.

A recursion timeout interval of 8 seconds. This is the length of time the DNS service waits before failing a recursive lookup that has been retried.

Under most circumstances, these parameters do not need adjustment. However, if you are using recursive lookups over a slow-speed wide area network (WAN) link, you might be able to improve server performance and query completion by making slight adjustments to the settings.


Wednesday, January 17, 2018

Windows Firewall, determining required ports

Just a quick note on using Microsoft Sysinternals utilities with the Windows firewall log.

For this worked example I am going to communicate with the target server (the server with the firewall) using PSEXEC for remote execution. You could just as easily work on the server console or use PowerShell.

As usual, I like to explain by real-life example.

A colleague is setting up a Windows print server, and Microsoft has provided the required protocols and ports to be opened; surprise, surprise, the information is incomplete.

Step One
Examine the Windows Firewall log. By default it resides at:

\\MyServerName\c$\Windows\System32\LogFiles\Firewall 

We can see that when the engineer tries to remotely install a driver, packets are dropped. In the log it looks like this (I have removed the date and time for brevity):

The headings in the firewall log tell you which field is which; in the lines below, the sixth field is the destination port.

DROP TCP 10.150.85.240 10.20.68.183 12387 9001 48 S 4157967098 0 8192 - - RECEIVE
DROP TCP 10.150.85.240 10.20.68.183 12388 9001 48 S 3357802324 0 8192 - - RECEIVE
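
If the log is large, a short script can tally which destination ports are being dropped. This Python sketch assumes the space-separated field layout shown above, with the date and time fields already stripped:

```python
import collections

def dropped_ports(log_lines):
    """Count DROP entries by (protocol, destination port).

    Expected field order per line (date/time stripped):
    action proto src-ip dst-ip src-port dst-port size ...
    """
    counts = collections.Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 6 and fields[0] == "DROP":
            counts[(fields[1], int(fields[5]))] += 1
    return counts
```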

In this imaginary scenario, it looks like we are dropping TCP 9001 (if you already know what that is, pretend you don't for the sake of this tutorial). So the next step would be to track down what that port is being used for and whether we should be opening it. We need to get onto that server using one of:


  • PowerShell
  • Console
  • RDP
  • PSEXEC

First we will run the built-in Windows tool 'NetStat' using the syntax:

netstat -an -o

Which will display all the ports on which the server is listening. The '-a' switch shows all connections and listening ports, and '-n' suppresses resolution of IP addresses to names, which makes the command run much faster. The '-o' switch displays the owning process ID, which is what we want. The output will look something like this:


Active Connections

Proto Local Address         Foreign Address  State           PID
TCP   0.0.0.0:135           0.0.0.0:0        LISTENING       992
TCP   0.0.0.0:445           0.0.0.0:0        LISTENING       4
TCP   0.0.0.0:5357          0.0.0.0:0        LISTENING       4
TCP   0.0.0.0:5985          0.0.0.0:0        LISTENING       4
TCP   0.0.0.0:8081          0.0.0.0:0        LISTENING       3812
TCP   0.0.0.0:8092          0.0.0.0:0        LISTENING       4
TCP   0.0.0.0:8495          0.0.0.0:0        LISTENING       3812
TCP   0.0.0.0:47001         0.0.0.0:0        LISTENING       4
TCP   0.0.0.0:49664         0.0.0.0:0        LISTENING       680
TCP   0.0.0.0:49665         0.0.0.0:0        LISTENING       884
TCP   0.0.0.0:49666         0.0.0.0:0        LISTENING       1320
TCP   0.0.0.0:49669         0.0.0.0:0        LISTENING       2212
TCP   0.0.0.0:49670         0.0.0.0:0        LISTENING       760
TCP   0.0.0.0:49699         0.0.0.0:0        LISTENING       752
TCP   0.0.0.0:49710         0.0.0.0:0        LISTENING       3812
TCP   0.0.0.0:49713         0.0.0.0:0        LISTENING       760
TCP   10.150.210.201:9001   0.0.0.0:0        LISTENING       42
TCP   10.150.210.201:50495  10.20.104.6:443  ESTABLISHED     10744
TCP   10.150.210.201:50496  10.20.104.6:443  ESTABLISHED     10744
TCP   10.150.210.201:50497  10.20.104.6:443  ESTABLISHED     10744

We can see in the output above that port 9001 is being listened on by a process with PID (Process ID) 42. We now need to know what that process is. We can use the built-in Windows 'tasklist' tool; just run it and it should produce output something like this (I have trimmed the output for clarity):

Image Name             PID SessionName    Session# Mem Usage
====================  ==== =============  =======  ===========
System Idle Process      0 Services        0             4 K
System                   4 Services        0         2,764 K
svchost.exe            836 Services        0        23,516 K
svchost.exe            884 Services        0        16,544 K
svchost.exe           1104 Services        0        26,160 K
WUDFHost.exe          1184 Services        0         2,984 K
svchost.exe           1320 Services        0        65,344 K
dasHost.exe           1744 Services        0         6,912 K
svchost.exe           2020 Services        0         6,648 K
hpservice.exe         1256 Services        0         1,144 K
svchost.exe           1080 Services        0         4,808 K
svchost.exe           2156 Services        0         9,364 K
spoolsv.exe             42 Services        0        16,140 K
wrapper.exe           2408 Services        0         2,336 K
mDNSResponder.exe     2424 Services        0         2,832 K

If we examine this list we can see that PID 42 is 'spoolsv.exe', so this all provides us with a set of clues:


  1. When we investigate the origin of the dropped traffic in the firewall log, we discover that the IP belongs to the workstation we are testing our print server from.
  2. Researching port 9001 reveals it to be the HP JetDirect printing port.
  3. Spoolsv.exe is the Windows Print Spooler.
So it's a good bet that this is something we need to open on the firewall of the print server.