DSC: Script Resource GetScript
If you look on the TechNet page for the Script Resource you will see
GetScript = { <# This must return a hash table #> }
Which is, technically speaking, true...usually...right up until you run Get-DscConfiguration on a machine, at which point it will get to that script resource and die saying:
The PowerShell provider returned results that are not valid from Get-TargetResource. The <keyname> key is not a valid property in the corresponding provider schema file. The results from Get-TargetResource must be in a Hashtable format. The keys in the Hashtable must be the same as the properties in the corresponding provider schema file.
The consensus around the web is that the error is saying you have to return a hashtable with keys that match the properties of the schema, so in this case the schema for the Script resource is:
#pragma namespace("\\\\.\\root\\microsoft\\windows\\DesiredStateConfiguration")

[ClassVersion("1.0.0"),FriendlyName("Script")]
class MSFT_ScriptResource : OMI_BaseResource
{
  [Key] string GetScript;
  [Key] string SetScript;
  [Key] string TestScript;
  [write,EmbeddedInstance("MSFT_Credential")] string Credential;
  [Read] string Result;
};
Which means in order for your Script resource to be compliant you need to return:
GetScript = {return @{ Result = ();GetScript=$GetScript;TestScript=$TestScript;SetScript=$SetScript}}
But when you think about it, this doesn't make a lot of sense. For every other resource I can think of the rule is perfectly sensible, because there the parameters in the schema describe the state of the resource you want to control, not how you control it and how you test for it.
It would be like Get-TargetResource for the Registry resource not just returning the information about the key, its value, and so on, but returning that AND the entire contents of MSFT_RegistryResource.psm1, which would make literally no sense. We don't care HOW you check or HOW you set, and returning a GetScript containing the contents of GetScript is...batty...we care about the resource being controlled.
Luckily, the statement that "the keys need to match the parameters" can be interpreted to mean you need to match ALL of them, or it can be interpreted to mean they just need to exist, and in the case of the Script resource, Result does exist. And that is what we need to return:
GetScript = {return @{Result=''}}
They really need to update the TechNet page to say "GetScript needs to return a hash table with at least one key matching a parameter in the schema for the resource".
No need to return potentially hundreds of lines of code in some M.C. Escher-like construct containing itself. Just stick to returning information about the resource you are controlling. If your script sets the contents of a file, return the contents of that file. Not the contents of the file AND the script you used to set it AND the script you used to test it.
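For example, here is a minimal sketch of that principle, assuming a Script resource that manages a one-line settings file (the path and contents are hypothetical):

# Hypothetical example: GetScript returns the state of the thing being
# managed (the file contents), not the scripts that manage it.
Script SettingsFile
{
    SetScript  = { Set-Content -Path "C:\example\settings.txt" -Value "expected contents" }
    TestScript = { [string](Get-Content -Path "C:\example\settings.txt" -ErrorAction SilentlyContinue) -eq "expected contents" }
    GetScript  = { return @{ Result = [string](Get-Content -Path "C:\example\settings.txt" -ErrorAction SilentlyContinue) } }
}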
DSC: Registry Resource Binary Comparison Bug
Ever used DSC to set a binary registry value only to find out no matter how many times it sets it, it always thinks the value is incorrect?
The problem lies in MSFT_RegistryResource.psm1, at line 926:
$Data | % {$retString += [String]::Format("{0:x}", $_)}
Should be:
$Data | % {$retString += [String]::Format("{0:x2}", $_)}
Because it isn't, the value data, in this case:
ValueData = @("8232c580d332674f9cab5df8c206fcd8")
Which is 82 32 c5 80 d3 32 67 4f 9c ab 5d f8 c2 06 fc d8 in hex, dies: that 06 towards the end gets turned into a 6, and even if that were valid hex it wouldn't match the input, and thus Test-DSCResource fails every single time.
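You can see the difference right in the console:

# Without the width specifier a single-digit byte loses its leading zero,
# so the string rebuilt from the registry never matches the input.
[String]::Format("{0:x}", 6)    # returns "6"
[String]::Format("{0:x2}", 6)   # returns "06"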
Bundle DSC Waves for Pull Server Distribution
This assumes you have WinRAR installed to the default path. Note that it will also delete the source files after it creates the zip files.
After running this script copy the resulting files to the DSC server in the following location: "C:\Program Files\WindowsPowerShell\DscService\Modules"
$modpath = "-path to dsc wave-"
$output = "-path to save wave to-"
[regex]$reg = "([0-9\.]{3,12})"

if((Test-Path $output) -ne $true)
{
    New-Item -Path $output -ItemType Directory -Force
}

foreach($module in (Get-ChildItem -Path $modpath))
{
    # Pull the module version out of the manifest.
    $psd1 = ($module.FullName + "\" + $module + ".psd1")
    $content = Get-Content $psd1
    foreach($line in $content)
    {
        if($line.Contains("ModuleVersion"))
        {
            # Stage a copy named <ModuleName>_<Version>, as the pull server expects.
            $outpath = $output + "\" + $module.Name + "_" + ($reg.Match($line).Captures)
            if(Test-Path -Path $outpath)
            {
                Copy-Item -Path $module.FullName -Destination $outpath -Recurse
            }else{
                New-Item -Path $outpath -ItemType Directory -Force
                Copy-Item -Path $module.FullName -Destination $outpath -Recurse
            }
            # Zip the staged copy; -df deletes the staging folder afterwards.
            & "C:\Program Files\WinRar\winrar.exe" a -afzip -df -ep1 ($outpath + ".zip") $outpath
        }
    }
}

Start-Sleep -Seconds 1
New-DscCheckSum -Path $output
PowerShell DSC: Remote Monitoring Configuration Propagation
So if you are like me you are not really interested in crossing your fingers and hoping your servers are working right. Which is why it is uniquely frustrating that DSC does not have anything resembling a dashboard (not a complaint really, it is early days, but in practical application not knowing something went down is...not really an option unless you like being sloppy).
The way I build my servers is this: I have an XML file with a list of servers, their role, and their role GUID. Baked into the master image is a simple bootstrap script that goes and gets the build script; since I'm using DSC, the "build" script doesn't really build much, itself mostly just bootstrapping the DSC process. The first script to run is:
$nodeloc = "\\dscserver\DSC\Nodes\nodes.xml"

# Get node information.
try
{
    [xml]$nodes = Get-Content -Path $nodeloc -ErrorAction 'Stop'
    $role = $nodes.hostname.$env:COMPUTERNAME.role
}
catch
{
    Write-Host "Could not find matching node, exiting."
    Break
}

# Set correct build script location.
switch($role)
{
    "XenAppPKG"  { $scriptloc = "\\dscserver\DSC\Scripts\pkgbuild.ps1" }
    "XenAppQA"   { $scriptloc = "\\dscserver\DSC\Scripts\qabuild.ps1" }
    "XenAppProd" { $scriptloc = "\\dscserver\DSC\Scripts\prodbuild.ps1" }
}
Write-Host "Script location set to:"$scriptloc

if((Test-Path -Path "C:\scripts") -ne $true)
{
    New-Item -Path "C:\scripts" -ItemType Directory -Force -ErrorAction 'Stop'
}

Write-Host "Checking build script availability..."
while((Test-Path -Path $scriptloc) -ne $true)
{
    Start-Sleep -Seconds 15
}

Write-Host "Fetching build script..."
while((Test-Path -Path "C:\scripts\build.ps1") -ne $true)
{
    Copy-Item -Path $scriptloc -Destination "C:\scripts\build.ps1" -ErrorAction 'SilentlyContinue'
}

Write-Host "Executing build script..."
& C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -file "C:\scripts\build.ps1"
The information it looks for in the nodes.xml file looks like this:
<hostname>
  <A01 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
  <A02 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
  <A03 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
  <A04 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
  <B01 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
  <B02 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
  <B03 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
  <B04 role="XenAppProd" guid="22e35281-49c6-40f3-9fd7-ad7f8d69c84d" />
</hostname>
I won't go any further into this, as most of it has already been covered here before. The main gist is that my solution to this problem relies on the fact that I use the XML file to provision DSC on these machines.
There are a couple of modifications I need to make to my DSC config to enable tracking. Note that the param block is only there so I can override the GUID from the command line if I want; in reality you could just set the ValueData to ([GUID]::NewGuid()).ToString() and be fine.
The first bit of code takes place before I start my Configuration block; the actual Registry resource is the very last resource in the Configuration block (less chance of false positives due to an error mid-config).
param (
    [string]$verGUID = ([GUID]::NewGuid()).ToString()
)

...

Registry verGUID
{
    Ensure = "Present"
    Key = "HKLM:\SOFTWARE\PostBuild"
    ValueName = "verGuid"
    ValueData = $verGUID
    ValueType = "String"
}
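A hypothetical invocation overriding the GUID from the command line might look like this (script name and GUID are made up):

# Compile the config with an explicit verGUID instead of a generated one.
.\RoleConfig.ps1 -verGUID "d3b07384-d9a0-4c2b-8f6e-1a2b3c4d5e6f"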
From here we get to the important part:
[regex]$node = '(\[Registry\]verGUID[A-Za-z0-9\";\r\n\s=:\\ \-\.\{]*)'
[regex]$guid = '([a-z0-9\-]{36})'
$path = "\\dscserver\Configuration\"
$pkg = @()
$qa = @()
$prod = @()
$watch = @{}
$complete = @{}
[xml]$nodes = (Get-Content "\\dscserver\DSC\Nodes\nodes.xml")

# Find a list of machine names and role guids.
foreach($child in $nodes.hostname.ChildNodes)
{
    switch($child.Role)
    {
        "XenAppPKG"  { $pkg += $child.Name; $pkgGuid = $child.guid }
        "XenAppQA"   { $qa += $child.Name; $qaGuid = $child.guid }
        "XenAppProd" { $prod += $child.Name; $prodGuid = $child.guid }
    }
}

# Convert DSC GUID's to latest verGUID.
$pkgGuid = $guid.Match(($node.Match((Get-Content -Path ($path+$pkgGuid+".mof")))).Captures.Value).Captures.Value
$qaGuid = $guid.Match(($node.Match((Get-Content -Path ($path+$qaGuid+".mof")))).Captures.Value).Captures.Value
$prodGuid = $guid.Match(($node.Match((Get-Content -Path ($path+$prodGuid+".mof")))).Captures.Value).Captures.Value

# See if credentials exist in this session.
if($creds -eq $null){ $creds = (Get-Credential) }

# Make an initial pass, determine configured/incomplete servers.
if($pkg.Count -gt 0 -and $pkgGuid.Length -eq 36)
{
    foreach($server in $pkg)
    {
        $test = Invoke-Command -ComputerName $server -Credential $creds -ScriptBlock{
            (Get-ItemProperty -Path "HKLM:\SOFTWARE\PostBuild" -Name verGUID -ErrorAction 'SilentlyContinue').verGUID
        }
        if($test -ne $pkgGuid)
        {
            Write-Host ("Server {0} does not appear to be configured, adding to watchlist." -f $server)
            $watch[$server] = $pkgGuid
        }else{
            Write-Host ("Server {0} appears to be configured. Adding to completed list." -f $server)
            $complete[$server] = $true
        }
    }
}else{
    Write-Host "No Pkg server nodes found or no verGUID detected in Pkg config. Skipping."
}

if($qa.Count -gt 0 -and $qaGuid.Length -eq 36)
{
    foreach($server in $qa)
    {
        $test = Invoke-Command -ComputerName $server -Credential $creds -ScriptBlock{
            (Get-ItemProperty -Path "HKLM:\SOFTWARE\PostBuild" -Name verGUID -ErrorAction 'SilentlyContinue').verGUID
        }
        if($test -ne $qaGuid)
        {
            Write-Host ("Server {0} does not appear to be configured, adding to watchlist." -f $server)
            $watch[$server] = $qaGuid
        }else{
            Write-Host ("Server {0} appears to be configured. Adding to completed list." -f $server)
            $complete[$server] = $true
        }
    }
}else{
    Write-Host "No QA server nodes found or no verGUID detected in QA config. Skipping."
}

if($prod.Count -gt 0 -and $prodGuid.Length -eq 36)
{
    foreach($server in $prod)
    {
        $test = Invoke-Command -ComputerName $server -Credential $creds -ScriptBlock{
            (Get-ItemProperty -Path "HKLM:\SOFTWARE\PostBuild" -Name verGUID -ErrorAction 'SilentlyContinue').verGUID
        }
        if($test -ne $prodGuid)
        {
            Write-Host ("Server {0} does not appear to be configured, adding to watchlist." -f $server)
            $watch[$server] = $prodGuid
        }else{
            Write-Host ("Server {0} appears to be configured. Adding to completed list." -f $server)
            $complete[$server] = $true
        }
    }
}else{
    Write-Host "No Production server nodes found or no verGUID detected in Production config. Skipping."
}

# Pause for meatbag digestion.
Start-Sleep -Seconds 10

# Monitor incomplete servers until all servers return matching verGUID's.
if($watch.Count -gt 0){ $monitor = $true }else{ $monitor = $false }

while($monitor -ne $false)
{
    $monitor = $false
    $cleaner = @()
    foreach($server in $watch.Keys)
    {
        $test = Invoke-Command -ComputerName $server -Credential $creds -ScriptBlock{
            (Get-ItemProperty -Path "HKLM:\SOFTWARE\PostBuild" -Name verGUID -ErrorAction 'SilentlyContinue').verGUID
        }
        if($test -eq $watch[$server])
        {
            $complete[$server] = $true
            $cleaner += $server
        }else{
            $monitor = $true
        }
    }
    foreach($item in $cleaner){ $watch.Remove($item) }
    Clear-Host
    Write-Host "Configured Servers:`r`n"$complete.Keys
    Write-Host "`r`n`r`nIncomplete Servers:`r`n"$watch.Keys
    if($monitor -eq $true){ Start-Sleep -Seconds 10 }
}

Clear-Host
Write-Host "Configured Servers:`r`n"$complete.Keys
Write-Host "`r`n`r`nIncomplete Servers:`r`n"$watch.Keys
At the end of the day, is this a perfect solution? No. Bear in mind I just slapped this together to fill a void; things could be made object-oriented, cleaned up, probably streamlined, but honestly a PowerShell script is not a good dashboard. I would also rather the servers themselves flag their progress in a centralized location rather than being pinged by a script.
But that is really something best implemented by the PowerShell devs, as anything third-party would, IMO, be rather ugly. So if all we have right now is ugly, I'll take ugly and fast.
As always, use at your own risk. I cannot imagine how you could eat a server with this script, but don't go using it as some definitive health metric. Just use it as a way to get a rough idea of the health of your latest configuration push.
PowerShell: DSC Sometimes Killing The Provider Isn't Enough...
A timely post considering the previous one. I've had a lot of problems with configurations just seemingly not taking effect. The only way I've found to clear this up is by deleting the following files on the target machine:
"C:\Windows\System32\Configuration\Current.mof" "C:\Windows\System32\Configuration\Current.mof.checksum" "C:\Windows\System32\Configuration\DSCEngineCache.mof" "C:\Windows\System32\Configuration\backup.mof"
In my case I was syncing files and, for the life of me, could not get it to see the newest addition to the directory. I could delete older files/folders and it would replace them, but it patently refused to ever copy out the new one. Once I deleted these files and let DSC run, I could delete the new file/folder to my heart's content and it would always put it back down the next time DSC passed.
To batch-fix your servers (this assumes you have them all in an AD group; you could just create an array and pass it):
if($creds -eq $null){ $creds = Get-Credential }

foreach($member in (Get-ADGroupMember <groupname>))
{
    $member.Name
    Invoke-Command -ComputerName $member.Name -Credential $creds -ScriptBlock{
        $strPath = "C:\Windows\System32\Configuration\"
        $arFiles = @("Current.mof","Current.mof.checksum","DSCEngineCache.mof","backup.mof")
        foreach($item in $arFiles)
        {
            Write-Host "Removing: $strPath$item"
            Remove-Item -Path "$strPath$item" -Force
        }
    }
}
Be warned that this should ONLY be done if you are having the problem across the board; otherwise just Invoke-Command on the individual servers or, if only a portion are having problems, feed in an array. In my case EVERY server was failing on this File operation, so killing it across the board made sense. But it is still not something I would take lightly (batch-deleting files never is).
PowerShell: DSC Debug
Looks like killing the WMI provider won't be necessary much longer, based on this blog post.
PowerShell: DSC Example Configuration
I figured I would give a more practical and slightly more complex example config, which you can find here.
Anything in <> was stripped for security's sake, but the overall gist of it is there. zAppvImport is a custom DSC resource I wrote to ensure the contents of a path are imported onto a XenApp server. There are a couple of weird things I had to account for in this build, namely the legacy apps and the permissions I need to set. The installer for one of them no longer exists, so a file copy is the only way; the other has a terrible old installer that hangs half the time, so it gets the same treatment (it HATES Windows Installer for some reason, which seems to corrupt the files no matter what, so a file copy it is).
This is just an example of a custom App-V XenApp 6.5 server config (that isn't done) that goes from barebones to configured in these few, relatively simple, steps.
PowerShell: DSC, Step By Step
For ease of use here are the crucial bits about DSC, in more or less chronological order:
- Create DSC Pull Server
- Configure Client for DSC Pull
- Create Custom DSC Resource
- Setting It All In Motion
Good luck!
PowerShell: DSC, Custom Modules, Custom Resources, and Timing...
Hypothetical situation, you want to accomplish the following:
- Install the App-V 5.0 SP2 client.
- Configure the client.
- Restart the service.
- Import App-V sequences.
If that sounds simple to you, then you haven't tried it in DSC.
In fairness it isn't THAT complex, but it isn't very straightforward either, and there is little real reason for it to be complicated beyond a simple lack of foresight on the part of the DSC engine.
The first three tasks are very simple: a Package resource, a Registry resource, and a Script resource. But that last bit is tricky, because it needs the modules installed in step 1 in order to work. Given that the DSC engine loads the ENTIRE script up front (which is normal for PowerShell, but given the nature of what the DSC engine does this is, IMO, a BIG hindrance), they aren't there when it first processes, so it pukes. You can tell this is your problem if you see a "Failed to delete current configuration." error (the config, by the way, should at that point be visible right where WebDownloadManager left it, C:\Windows\Temp\<seriesofdigits>) as well as a complaint that the module at whatever location could not be imported because it does not exist.
So what is the solution?
Sadly, it's kind of convoluted. First let's look at the config. Pretty simple: I call a custom resource that imports the App-V sequences, and in that custom resource I have a snippet of code at the very top:
$modPath = "C:\Program Files\Microsoft Application Virtualization\Client\AppvClient"
if((Test-Path -Path $modPath) -eq $true)
{
    Import-Module $modPath
}else{
    $bypass = $true
}
Now let's look at the script baked into the PVS image:
- Enable WinRM. Easier to do this than undo it in the VERY unlikely case we don't want it; DSC needs it, so...
- Create a scripts directory. Not terribly important; you could just bake your script into this path.
- Find this host's role/GUID in an XML file stored on the network.
- Create the LCM config.
- Apply the LCM config.
- Copy modules from the DSC Module share. This overlaps with the DSC config, but the DSC config will run intermittently, not just once, for consistency.
- Shell out a start to the Consistency Engine.
This last bit is VERY important in two regards. The first is that if you just run powershell.exe with that command it WILL exit your script. The only way I've found to prevent this is to shell out so that it closes the shell, not your script; StartInfo.UseShellExecute is thus very important.
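A minimal sketch of what I mean, assuming the stock WMF 4.0 consistency invocation (the exact arguments are an assumption; lift them from the DSC scheduled task on your own box):

# Shell out to a separate powershell.exe so the Consistency Engine run
# closes its own shell rather than exiting this script.
$proc = New-Object System.Diagnostics.Process
$proc.StartInfo.FileName = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$proc.StartInfo.Arguments = '-NonInteractive -Command "Invoke-CimMethod -Namespace root/Microsoft/Windows/DesiredStateConfiguration -ClassName MSFT_DSCLocalConfigurationManager -MethodName PerformRequiredConfigurationChecks -Arguments @{Flags=[System.UInt32]1}"'
$proc.StartInfo.UseShellExecute = $true   # the important bit
$proc.Start() | Out-Null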
The second important bit is wiping out the WMI provider. Without this, it waited three minutes, ran again, and promptly behaved as if $bypass was still being tripped, even though I could verify the module WAS in fact in place. I do not like this caching at all.
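That wipe is just the same one-liner from the Quirks post further down the page, something like:

# Kill the cached DSC WMI provider host so the next consistency run
# actually sees the freshly installed module.
Get-Process wmi* | Where-Object { $_.Modules.ModuleName -like "*DSC*" } |
    Stop-Process -Confirm:$false -Force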
So the first time I run consistency I know it won't put everything in place, because it needs the client installed before the client modules exist, and even with DependsOn = "[Package]Install" it still pukes; DependsOn doesn't seem to have any impact on how the resources are loaded.
I wait three minutes because I want to give the client time to install. I don't love this, but this is just example code; in reality you would mainly be concerned with two things:
- Is the LCM still running?
- Is the client installed?
So I would probably watch the event log and the client module folder before making my second run, timing out after ten minutes or so (in this case the scheduled task will run it again 15 minutes later anyway; I don't want to get in the way).
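A rough sketch of that wait (path and polling interval are mine):

# Poll for the App-V client module folder, giving up after ten minutes;
# the scheduled consistency task will pick things up after that anyway.
$modPath = "C:\Program Files\Microsoft Application Virtualization\Client\AppvClient"
$deadline = (Get-Date).AddMinutes(10)
while(((Test-Path -Path $modPath) -ne $true) -and ((Get-Date) -lt $deadline))
{
    Start-Sleep -Seconds 15
}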
Why bother with this? Mainly because I don't want to wait half an hour for my server to be functional. I run them initially back to back because I can either bake the GUID in or use a script to "provision" it, and while I'm there, why rely on the scheduled task? This is a Server 2008 R2 server, so I can't use the Get-ScheduledTask cmdlet, and while, yes, I could bake in the Consistency task with a shorter trigger and change it in my DSC config...that is just as much work and more moving parts.
I want to configure and make my initial pass as quickly as is safe to do so, and then allow it to poll for consistency thereafter.
PowerShell: Configure Client for DSC Pull
The final piece of the puzzle here is configuring a client to actually pull its config from the server you created.
The thing I will note about the GUID is that which one you use depends on how you intend to set up DSC.
There are many ways to skin this cat. You can use one GUID per role, say XenApp = 6311dc98-2c2a-4fbe-a8bc-e662da33148e and App-V = 6b5dac21-6181-400f-8c7a-0dd4bfd0926d, which keeps the GUID count/management low but also makes it difficult to target specific nodes (though how important that is, I leave up to you; for me, I want all my App-V servers to look alike, and the same for my XenApp servers).
You can generate a GUID on the fly like this:
[guid]::NewGuid().Guid
And then keep track of them in a database. Managing this is going to require more legwork, though, as you provision servers.
Ultimately, for now, I am using the one GUID per role method.
The second note here is that you do NOT want to use HTTP; if you can, PLEASE use SSL. This script does NOT use SSL, but it is harder to find info on setting it up without SSL than with, so...remove the following snippet from DownloadManagerCustomData to enforce SSL:
;AllowUnsecureConnection = 'True'
Once you have this tweaked to your liking you can apply the configuration by running:
Set-DscLocalConfigurationManager -Path
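For reference, a minimal sketch of what the whole LCM meta-config might look like under the one-GUID-per-role scheme (the server URL and output path are assumptions; note the insecure line you would remove for SSL):

Configuration SetPullMode
{
    Node $env:COMPUTERNAME
    {
        LocalConfigurationManager
        {
            ConfigurationID = "6311dc98-2c2a-4fbe-a8bc-e662da33148e"  # role GUID from above
            ConfigurationMode = "ApplyAndAutoCorrect"
            RefreshMode = "Pull"
            DownloadManagerName = "WebDownloadManager"
            DownloadManagerCustomData = @{
                ServerUrl = "https://dscserver:8080/PSDSCPullServer.svc"
                AllowUnsecureConnection = 'True'   # remove this line to enforce SSL
            }
        }
    }
}

SetPullMode -OutputPath "C:\DSC\PullMode"
Set-DscLocalConfigurationManager -Path "C:\DSC\PullMode"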
PowerShell: Create DSC Pull Server
This is a slightly modified version of the example script MS posted to the Gallery; this one a) works and b) installs EVERYTHING you need to build a DSC Pull server rather than just hoping it's there.
The custom resource: Can be found here.
The script: Can be found here.
I would advise extracting the files to C:\Windows\System32\WindowsPowerShell\v1.0\Modules and making sure you right-click->Properties->Unblock the .ps1 files.
Once you create configs they go here: C:\Program Files\WindowsPowerShell\DscService\Configuration
Remember, for Pull mode the node names MUST be GUIDs and those GUIDs MUST match the client's ConfigurationID. You can see the client's ConfigurationID by typing:
Get-DscLocalConfigurationManager
PowerShell: DSC Quirks, Part 3
Importing module MSFT_xDSCWebService failed with error - File C:\Program Files\WindowsPowerShell\Modules\xPSDesiredStateConfiguration_1.1\xPSDesiredStateConfiguration_1.1\xPSDesiredStateConfiguration\DscResources\MSFT_xDSCWebService\MSFT_xDSCWebService.psm1 cannot be loaded because you opted not to run this software now.
Copy it to C:\Windows\System32\WindowsPowerShell\v1.0\Modules
At this point I would ignore MS and just always put your modules there.
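Something along these lines, assuming the paths from the error above (adjust to wherever the extracted module actually lives):

# Unblock the downloaded files, then copy the module folder itself
# under System32 so the DSC engine can load it.
$src = "C:\Program Files\WindowsPowerShell\Modules\xPSDesiredStateConfiguration_1.1\xPSDesiredStateConfiguration_1.1\xPSDesiredStateConfiguration"
Get-ChildItem -Path $src -Recurse -File | Unblock-File
Copy-Item -Path $src -Destination "C:\Windows\System32\WindowsPowerShell\v1.0\Modules" -Recurse -Force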
PowerShell: DSC Quirks, Part 2
In this episode:
Why does my config seem to use an old resource version?
Why is WmiPrvSE using up SO much memory?
The solution to both of these is actually the same:
gps wmi* | ?{ $_.Modules.ModuleName -like "*DSC*" } | Stop-Process -Confirm:$false -Force
This kills the WMI provider for DSC and forces it to launch a new one. As to why: in the first instance, I imagine the engine caches the script, and I bet there is a timeout before it will pick it up again. In the second case...it's WMI; either your script, or the machine, or just the alignment of the planets made it all emo. Kill it and move on.
This command, by the way, can absolutely be run against multiple machines remotely via WinRM. Not only that, but it can be monitored remotely as well. So if you have a problem, script out a monitor/resolver; it seems fairly sturdy in the sense that if you kill it mid-action it just bombs that run, and the next time the DSC task runs it fires up a new instance and goes on about its business. If the problem is chronic, I would review your code...
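A quick sketch of the remote version ($servers and $creds are assumptions):

# Kill the DSC WMI provider host on a list of servers over WinRM.
$servers = @("A01","A02","B01")
Invoke-Command -ComputerName $servers -Credential $creds -ScriptBlock {
    Get-Process wmi* | Where-Object { $_.Modules.ModuleName -like "*DSC*" } |
        Stop-Process -Confirm:$false -Force
}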
PowerShell: DSC Quirks, Part 1
Just going to catalog a few of the things I've seen crop up from time to time with DSC, starting with:
- Cannot Import-Module
- Cannot import custom DSC resource (even though Get-DscResource says it is there)
For that first one the solution is usually pretty simple; the AppvClient module is a case in point. Instead of this:
Import-Module AppvClient
Do this:
Import-Module "C:\Program Files\Microsoft Application Virtualization\Client\AppvClient"
You can find the exact location by typing:
(Get-Module AppvClient).Path
The folder, and this is important, the FOLDER the .psd1 is in should be provided to Import-Module; it will do the rest. Don't try to point it right at the .psd1 or anything else for that matter.
The second problem I have only seen on Server 2008 R2 with WMF 4.0 installed. What you will no doubt quickly learn is that you should put your DSC modules here:
C:\Program Files\WindowsPowerShell\Modules
Or, worst case, if x86, here:
C:\Program Files (x86)\WindowsPowerShell\Modules
And if you put them there and type Get-DscResource, they will indeed show up; run a config using them, however, and it will error out saying they couldn't be found. Unless you put them here:
C:\Windows\System32\WindowsPowerShell\v1.0\Modules
Why does it do this? I don't know. I suspect it is because WMF 4.0 is rather tacked on when it comes to Server 2008 R2. Frankly, if this is the price I have to pay for being able to use this on XenApp 6.5 servers, so be it!