Quick and Dirty Image Factory with MDT and PowerShell

I haven’t written a blog post in a while; I’ve been busy with the new job at Tanium. But I wrote this script recently and thought I would share it, in case anyone else finds it interesting. Share it forward.

Problem

I’ve been working on solutions to upgrade Windows 7 to Windows 10 using Tanium as the delivery platform (it’s pretty awesome if I do say so myself). But as with all solutions, I need to validate the system with some end-to-end tests.

As with most of my OS Deployment work, the code was easy; the testing was HARD!

So I needed to create some Windows 7 images with the latest updates. MDT to the rescue! I created an MDT Deployment Share (thanks Ashish ;^), then created a Media Share to contain each Task Sequence. With some fancy CustomSettings.ini work and some PowerShell glue logic, I can now re-create the latest Windows 7 SP1 patched VHD and/or WIM file at a moment’s notice.

Solution

First of all, you need an MDT Deployment Share with a standard Build and Capture Task Sequence. A Build and Capture Task Sequence is just the standard Client.xml task sequence, but we’ll override it to capture the image at the end.

In my case, I decided NOT to use MDT to capture the image into a WIM file at the end of the Task Sequence. Instead, I just have MDT perform the Sysprep and shut down. Then I can use PowerShell on the Host to perform the conversion from VHDX to WIM.

And when I say Host, I mean that all of my reference images are built using Hyper-V; that way I don’t have any excess OEM driver junk, and I can spin up the process at any time.

In order to fully automate the process, for each MDT “Media” entry I add the following line to the BootStrap.ini file:

    SkipBDDWelcome=YES

and the following lines into my CustomSettings.ini file:

    SKIPWIZARD=YES            ; Skip Starting Wizards
    SKIPFINALSUMMARY=YES      ; Skip Closing Wizards 
    ComputerName=*            ; Auto-Generate a random Computer Name
    DoCapture=SYSPREP         ; Run SysPrep, but don't capture the WIM.
    FINISHACTION=SHUTDOWN     ; Just Shutdown
    AdminPassword=P@ssw0rd    ; Any Password
    TASKSEQUENCEID=ICS001     ; The ID for your TaskSequence (Upper Case)

Now it’s just a matter of building the LitetouchMedia.iso image, mounting it to a Hyper-V Virtual Machine, and capturing the results.
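The media rebuild step can be driven from PowerShell via the MDT module. This is a minimal sketch; the deployment share path, drive name, and media ID (MEDIA001) are assumptions for illustration, so adjust them for your environment.

```powershell
# Sketch: rebuild the Litetouch media for a given MDT media entry.
# Paths and the media ID are placeholders for your own share.
Import-Module 'C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1'
New-PSDrive -Name DS001 -PSProvider MDTProvider -Root 'C:\DeploymentShare' | Out-Null

# Regenerates the media content, including LitetouchMedia.iso
Update-MDTMedia -Path 'DS001:\Media\MEDIA001' -Verbose
```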

Orchestrator

What I present here is the PowerShell script used to orchestrate the creation of a VHDX file from an MDT Litetouch Media build.

  • The script will prompt for the location of your MDT Deployment Share, or you can pass it in as a command-line argument.
  • The script will open the Deployment Share and enumerate all Media entries, prompting you to select which ones to use.
  • For each Media entry selected, the script will:
    • Force MDT to update the Media build (just to be sure)
    • Create a new Virtual Machine (and blow away the old one)
    • Create a new VHDX file and mount it into the Virtual Machine
    • Mount the LitetouchMedia.iso file into the Virtual Machine
    • Start the VM
  • The script will wait for MDT to auto-generate the build.
  • Once done, for each Media entry selected, the script will:
    • Dismount the VHDX
    • Create a WIM file (Compression Type: none)
    • Auto-generate a cleaned VHDX file

Code

The code shows how to use PowerShell to:

  • Connect to an existing MDT Deployment Share
  • Extract Media information, and rebuild the Media
  • Create a Virtual Machine and assign resources
  • Monitor a Virtual Machine
  • Capture and apply WIM images to VHDX virtual disks

Notes

I’ve been struggling with how to create an MDT VHDX file with the smallest possible size. I tried tools like Optimize-Volume and sdelete.exe to clear out as much space as possible, but I’ve been disappointed with the results. So here I’m using a technique of capturing the VHDX volume to a WIM file (uncompressed for speed), then applying the capture back to a new VHDX file. That ensures that no deleted files are transferred. Overall results are good:

Before:   19.5 GB VHDx file --> 7.4 GB compressed zip
After:    13.5 GB VHDx file --> 5.6 GB compressed zip
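The capture side of that trick can be sketched with the DISM PowerShell module. Paths are assumptions, and the partitioning of the target VHDX is omitted here for brevity; this is an outline, not the exact code from the Gist.

```powershell
# Sketch: capture a sysprepped VHDX volume to an uncompressed WIM.
$mount = Mount-VHD -Path 'D:\VMs\MDT-RefBuild.vhdx' -ReadOnly -Passthru
$drive = ($mount | Get-Disk | Get-Partition | Get-Volume |
          Where-Object DriveLetter | Select-Object -Last 1).DriveLetter

# Uncompressed capture is fast; only live files come across, never deleted blocks.
New-WindowsImage -CapturePath "$drive`:\" -ImagePath 'D:\Capture\Ref.wim' `
    -Name 'Windows 7 SP1 Reference' -CompressionType None
Dismount-VHD -Path 'D:\VMs\MDT-RefBuild.vhdx'

# Apply back to a freshly created and partitioned VHDX (mounted here as W:\):
# Expand-WindowsImage -ImagePath 'D:\Capture\Ref.wim' -Index 1 -ApplyPath 'W:\'
```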

Links

Gist: https://gist.github.com/keithga/21007d2aeb310a57f58392dfa0bdfcc2

https://wordpress.com/read/feeds/26139167/posts/2120718261

https://community.tanium.com/s/article/How-to-execute-a-Windows-10-upgrade-with-Tanium-Deploy-Setup

https://community.tanium.com/s/article/How-to-execute-a-Windows-10-upgrade-with-Tanium-Deploy-The-Sensors


 

A replacement for SCCM Add-CMDeviceCollectionDirectMembershipRule PowerShell cmdlet

TL;DR – The native Add-CMDeviceCollectionDirectMembershipRule PowerShell cmdlet sucks for adding more than 100 devices; use this replacement script instead.

How fast is good enough? When is the default too slow?

I guess most of us have been spoiled by modern machines: quad Xeon processors, a couple hundred GB of RAM, NVMe cache drives, and petabytes of storage at our command.

And don’t get me started on modern database indexing. You want to know what the average annual rainfall on the Spanish plain is? If I don’t get 2 million responses within half a second, I’ll be surprised, My Fair Lady.

But sometimes, as developers, we need to account for actual performance; we can’t just use the default process and expect it to work at scale in all scenarios.

Background

I’ve been working on a ConfigMgr project in an environment with well over 300,000 devices. We were prototyping a project that involved creating Device Collections and adding computers to the collections using Direct Membership Rules.

Our design phase was complete when one of our engineers mentioned that Direct Memberships are generally not optimal at scale. We figured that during the lifecycle of our project we might need to add 5000 arbitrary devices to a collection. What would happen then?

My colleague pointed to this article: http://rzander.azurewebsites.net/collection-scenarios which discusses some of the pitfalls of Direct Memberships, but doesn’t go into the details of why, or what the optimal solution would be for our scenario.

I went to our NWSCUG meeting last week, and there was a knowledgeable Microsoft fella there, so I asked him during lunch. He mentioned that there are no ongoing performance problems with Direct Membership collections; however, there might be some performance issues when creating or adding to the collection, especially within the Console (load the large collection into memory, then add a single device, whew!). He recommended, of course, running our own performance analysis to find out what works for us.

OK, so the hard way…

The Test environment

So off to my standard home SCCM test environment. I’m using the ever-efficient Microsoft 365 Powered Device Lab Kit. It’s a bit big, 50GB, but once downloaded, I have a fully functional SCCM lab environment with a Domain Controller, an MDT server, and an SCCM server, all running within a virtual environment, within seconds!

My test box is an old Intel motherboard circa 2011, with an i7-3930K processor and 32GB of RAM, with all Virtual Machines running off an Intel 750 Series NVMe SSD!

The first step was to create 5000 fake computers. That was fairly easy with a CSV file and the SCCM PowerShell cmdlet Import-CMComputerInformation. Done!
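A rough sketch of that step is below. The CSV column names, the way -VariableName maps them, and the target collection are assumptions; check the Import-CMComputerInformation documentation against your site before relying on them.

```powershell
# Sketch: generate 5000 fake machines in a CSV and import them into ConfigMgr.
$csv = 'C:\Temp\FakeComputers.csv'
1..5000 | ForEach-Object {
    [pscustomobject]@{
        Name       = 'FAKE{0:D5}' -f $_          # FAKE00001 .. FAKE05000
        SMBIOSGUID = [guid]::NewGuid().Guid      # unique hardware identity
    }
} | Export-Csv -Path $csv -NoTypeInformation

# Import the file, mapping the CSV columns to computer properties
Import-CMComputerInformation -CollectionName 'All Systems' -FileName $csv `
    -VariableName 'Name','SMBIOSGUID'
```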

Using the native ConfigMgr PowerShell cmdlets

OK, let’s write a script to create a new Direct Membership rule in ConfigMgr and write some Device objects to the collection.

Unfortunately, the native Add-CMDeviceCollectionDirectMembershipRule cmdlet doesn’t support adding devices via the pipeline, and won’t let us add more than one device at a time. Gee… I wonder if *that* will affect performance. Query the collection, add a single device, and write back to the server, for each device added. Hmm…
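So the naive approach looks something like this. The collection name and device name filter are placeholders; the point is the one-call-per-device shape.

```powershell
# The slow path: the native cmdlet forces one full round trip per device.
$collection = Get-CMDeviceCollection -Name 'My Direct Collection'
Get-CMDevice -Name 'FAKE*' | ForEach-Object {
    Add-CMDeviceCollectionDirectMembershipRule `
        -CollectionId $collection.CollectionId `
        -ResourceId $_.ResourceID
}
```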

Well, the performance numbers weren’t good:

    Items to add    Seconds to add all items
    5               4.9
    50              53

As you can see, the number of seconds increased proportionally with the number of items added. If I wanted to add 5000 items, we’re talking about 5000 seconds, or an hour and a half. Um… no.

In fact, a bit of decompiling of the native function in CM suggests that it’s not really designed for scale; it’s best for adding only one device at a time.

Yuck!

The WMI way

I decided to see if we could write a functional replacement to the Add-CMDeviceCollectionDirectMembershipRule cmdlet that made WMI calls instead.

I copied some code from Kadio on http://cm12sdk.net (sorry the site is down at the moment), and tried playing around with the function.

Turns out that the SMS_Collection WMI class has an AddMembershipRule() (singular) and an AddMembershipRules() (plural) method. Hey, adding more than one device at a time sounds… better!

<Insert several hours of coding pain here>

And I finally got something that I think works pretty well:

Performance numbers look much better:

    Items to add    Seconds to add all items
    5               1.1
    50              1.62
    500             8.06
    5000            61.65

It takes about the same amount of time to add 5000 devices using my function as it takes to add 50 devices using the native CM function. Additionally, some testing suggests that about half of the time for each batch is spent creating the rules (the process {} block), and the remaining half in the call to AddMembershipRules(); my guess is that should be even better in our production CM environment.

Note that this isn’t just a PowerShell function; it operates like a PowerShell cmdlet. The function accepts objects from the pipeline and processes them as they arrive, as quickly as Get-CMDevice can feed them through.
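The shape of the replacement is sketched below: batch the rules up in process {}, then make one AddMembershipRules() call in end {}. The site server, site code, and parameter names are assumptions here; the shipped version is in the Gist.

```powershell
# Sketch of a pipeline-aware replacement built on SMS_Collection.AddMembershipRules().
function Add-DirectMembershipRules {
    param(
        [Parameter(Mandatory)] [string] $SiteServer,
        [Parameter(Mandatory)] [string] $SiteCode,
        [Parameter(Mandatory)] [string] $CollectionId,
        [Parameter(Mandatory, ValueFromPipeline)] $Device
    )
    begin { $rules = @() }
    process {
        # Build one SMS_CollectionRuleDirect per incoming device
        $rule = ([wmiclass]"\\$SiteServer\root\SMS\site_$($SiteCode):SMS_CollectionRuleDirect").CreateInstance()
        $rule.ResourceClassName = 'SMS_R_System'
        $rule.ResourceID        = $Device.ResourceID
        $rule.RuleName          = $Device.Name
        $rules += $rule
    }
    end {
        # One server round trip for the whole batch
        $collection = Get-WmiObject -ComputerName $SiteServer `
            -Namespace "root\SMS\site_$SiteCode" `
            -Query "SELECT * FROM SMS_Collection WHERE CollectionID = '$CollectionId'"
        $collection.AddMembershipRules($rules) | Out-Null
    }
}

# Usage sketch:
# Get-CMDevice -Name 'FAKE*' |
#     Add-DirectMembershipRules -SiteServer 'CM01' -SiteCode 'PS1' -CollectionId 'PS100042'
```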

However more testing continues.

-k


New script – Import Machine Objects from Hyper-V into ConfigMgr

Quick post; I’ve been doing a lot of ConfigMgr OSD deployments lately, with a lot of Hyper-V test hosts.

For my test hosts, I’ve been creating Machine Objects in ConfigMgr by manually entering them one at a time (yuck). So I was wondering what the process is for creating Machine Objects via PowerShell.

Additionally, I was curious how to inject variables into the Machine Object that could be used later in the deployment process, in this case a Role.
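The general idea can be sketched like this: pull names and MAC addresses out of Hyper-V, import them, then tag each object with a variable. The collection name, the Role value, and the exact parameter mapping are assumptions for illustration.

```powershell
# Sketch: import Hyper-V guests as ConfigMgr machine objects, then set a Role variable.
$csv = 'C:\Temp\HyperVGuests.csv'
Get-VM | Get-VMNetworkAdapter | ForEach-Object {
    [pscustomobject]@{
        Name       = $_.VMName
        MACAddress = ($_.MacAddress -replace '(..)(?!$)', '$1:')  # 00155D... -> 00:15:5D:...
    }
} | Export-Csv -Path $csv -NoTypeInformation

Import-CMComputerInformation -CollectionName 'OSD Test Hosts' -FileName $csv `
    -VariableName 'Name','MACAddress'

# Tag each machine object with a variable the task sequence can read later
Get-CMDevice -CollectionName 'OSD Test Hosts' | ForEach-Object {
    New-CMDeviceVariable -DeviceName $_.Name -VariableName 'Role' -VariableValue 'TestLab' | Out-Null
}
```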

Up next, how to extract this information from VMWare <meh>.

MDT 2013 UberBug01 – MDAC and the Fast Machine

Well MDT 2013 Update 1 is out. Yea!

Time to test and evaluate to see if there are any regressions in my test environment.

Wait… Something went wrong…


Fast System

As I mentioned in an earlier blog post, I recently purchased a super fast Intel 750 SSD to make my MDT build machine run faster. Blazing fast!


You might think: “Wow, that’s super cool, everything would be so much better with a faster machine”

Essentially, yes, except when it’s faster than the Operating System can handle.  :^(

The Bug

When updating a deployment share you may get the following error message:

Deployment Image Servicing and Management tool
Version: 10.0.10240.16384
 
Image Version: 10.0.10240.16384
 
Processing 1 of 1 - Adding package WinPE-MDAC-Package~31bf3856ad364e35~amd64~~10.0.10240.16384
 
Error: 1726
 
The remote procedure call failed.
An error occurred closing a servicing component in the image.
Wait a few minutes and try running the command again.
 

Dism.log shows nothing interesting:

 
2015-07-15 13:55:00, Error                 DISM   DISM.EXE: DISM Package Manager processed the command line but failed. HRESULT=800706BE
2015-07-15 13:55:00, Error                 DISM   DISM Manager: PID=2364 TID=2424 Failed to get the IDismImage instance from the image session - CDISMManager::CloseImageSession(hr:0x800706ba)
2015-07-15 13:55:00, Error                 DISM   DISM.EXE:  - CDismWrapper::CloseSession(hr:0x800706ba)

I asked someone knowledgeable at Microsoft (MNiehaus), and he mentioned that he had seen it a couple of times but couldn’t repro it consistently. However, I could easily reproduce the problem on demand with my hydration/buildout scripts.

Turns out that there is a narrow case where this bug manifests:

  • If you add any optional components to WinPE within MDT
  • If you have a fast hard disk (like my Intel 750 SSD)
  • If you have <UseBootWim> defined in your settings.xml, it may get worse.

The fast disk is most likely why the Windows Product Group teams never saw this bug in testing.

Well, I provided a repro environment to the Windows Product Groups involved in this component, even letting them log into my machine to reproduce the issue.

The good news is that they were able to determine the root cause (timing-related, during unload of various components), and even provided me with a private fix for testing! The private fix passed!

Now the real fun begins. There is a legitimate challenge here, because the error exists in the Windows 10 Servicing Stack, and that stack is embedded *into* WinPE.wim in the ADK and Boot.wim on the OS install disk.

How do you update these files with the fixed servicing stack? It’s not a trivial matter. They could send out a KB QFE fix and let customers update the files manually with dism.exe; they could repackage and re-release the ADK itself; or, worst case, wait till the next release of the Windows OS ISO images.

I have been monitoring the status, and there are several team members working on the release issues, and even someone from Customer Support acting as a customer advocate. My work here is done.

Work Around

In the meantime, you can work around the issue:

  • Remove optional components from MDT. :^(
  • Or, of course, move to a machine with a slower disk.
  • I had some luck getting optional components added when I set <UseBootWim> to false in the settings.xml file.
  • Additionally, Johan has mentioned that he can get it to pass if the OS running MDT is Windows 10 (I was running Windows Server 2012 R2).

For me, it was easy: I don’t use the MDAC components in my environment, so I just removed them from the settings.xml file. Lame, I know.

-k

More Deployment bugs to follow!

Some practical notes on WSUS – Early 2015

I had to rebuild my WSUS server recently, and I decided to write down some notes on how I set up the machine.

The environment

I created a simple Virtual Machine running Windows Server 2012 R2 on one of my Windows 8.1 host machines. 2GB of dynamic memory and a 500GB local hard disk work great.

I don’t use the WSUS machine for day-to-day updates of my clients; instead, the server is set up only for imaging. It works great as a cache when re-creating test images over and over, so I don’t have to download the updates each time.

The configuration

I basically configure my environment to download and auto-approve everything *except* drivers. I don’t need drivers in my imaging environment, and I have seen some comments that driver management in WSUS is problematic anyway.


Then I set the Synchronization Schedule to run every day.

When creating my Images via MDT Litetouch, I can easily point to my WSUS Server by entering the line:

WSUSServer=http://PickettW:8530

Exclusions

There are two updates I block on the Server Side:

  • Install packages for IE 9 & IE 10 – Since Windows Update will eventually install IE 11 anyway, there is no need to install IE 9 or IE 10, and no reason to install all the updates for those versions.
  • .NET Framework version 4.0 – Since we will eventually be installing .NET Framework 4.5, and version 4.5 already includes version 4.0 anyway.
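Those two blocks can also be applied server-side from PowerShell with the UpdateServices module. This is a sketch; the title patterns are my own rough assumptions and will need tuning (the .NET filter in particular must not catch 4.5), so review what matches before piping to Deny-WsusUpdate.

```powershell
# Sketch: decline the excluded updates on the WSUS server itself.
Get-WsusUpdate -Approval AnyExceptDeclined |
    Where-Object {
        $_.Update.Title -match 'Internet Explorer (9|10)' -or
        ($_.Update.Title -match '\.NET Framework 4' -and $_.Update.Title -notmatch '4\.5')
    } | Deny-WsusUpdate
```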

Supersedence

Say you have two installation packages, KB555555 and KB666666. They fix different things, but they patch the *same* file: ntoskrnl.exe. If we install both packages, which one wins? Well, update packages carry metadata declaring which one supersedes the other. That also means that if KB666666 supersedes KB555555, then you don’t even need to install KB555555, because it is going to be replaced anyway.

There is a lot of work in the WSUS internal database to keep track of all of the interdependencies and supersedence. To keep things running smoothly, I recommend running the “WSUS Server Cleanup Wizard” occasionally to ensure that superseded updates are declined in the database and not offered to clients.
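The same cleanup can be scheduled from PowerShell; the sketch below uses the UpdateServices module's cleanup cmdlet with roughly the same options the wizard offers. Run it on the WSUS server itself, or add -UpdateServer for a remote box.

```powershell
# Sketch: the Cleanup Wizard's equivalent from PowerShell.
Invoke-WsusServerCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates `
    -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates
```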

The Problems

The first thing to check after your installation is the ZTIWindowsUpdate.log file, to see what installed and what didn’t. Occasionally you may encounter problems during installation: bad packages and other mysterious installation errors.

For most errors, running the “WSUS Server Cleanup Wizard” is one of the quickest ways to clean up the machine and remove some obvious errors on the server.

If you need more help, it’s worth looking at the C:\Windows\WindowsUpdate.log file to see if it gives any more hints as to what the problem is.

One of my least favorite updates is the “Definition Update for Windows Defender”; this update has all kinds of problems loading in my environment. The problem is that Microsoft updates this package about *once a day*, so if you spend time trying to narrow down the bad update, Microsoft will have pushed out an updated version just to confuse you.


Best advice: if you encounter an error and it’s blocking your reference image install, just block that specific update instance in WSUS and try again.

Sucks I know.

New for the Lab: Intel 750 Series SSD

I got a new toy for the build lab: an Intel 750 Series SSD. Prices were fairly reasonable at about $1/GB, as compared to typical SATA SSD drives at $0.50/GB.

Performance-wise, it should be much faster than my standard SATA SSD drives.


I was a little surprised to see that the retail package included a DVD. I still have a workstation with a DVD reader, but the drivers were already on Windows Update, so the disc was not necessary.

I plugged it into my lab build machine with an ASUS motherboard and a Z87 chipset. At first, the drive wasn’t detected, so I upgraded the UEFI firmware and that got it working. <whew!> I wasn’t interested in purchasing a new motherboard, so that was a close call.

I re-ran my reference hydration system, building out 9 WIM images for x86, x64, and Server versions of Windows 6.1 (Win7), 6.3 (Win8.1), and 10.

Without any other optimizations, just moving the *.vhdx files used during the build to the new drive, the most complex image (Windows 7 SP1 x64) went from:

Before: 3 hours 2 minutes
After: 2 hours 38 minutes

About a 13% decrease in time. Not bad, but I’ve still got some more work to do on the machine to make it faster, perhaps moving the OS drive to the new SSD, and other caching. :^)


Sysprep Windows 10 Build 10074 Fix

There have been some reported Sysprep errors in Windows 10 Build 10074, something to do with AppX controls (of course) in the Panther logs.

There is a workaround that has been floating around here and there; hopefully it’s only a temporary fix for Windows 10:

Fix:

  • Stop the ‘tiledatamodelsvc’ service (ensure it has *actually* stopped)
  • Stop the ‘staterepository’ service (ensure it has *actually* stopped)
  • Prevent both services from starting again by modifying the ImagePath
  • Run Sysprep normally
  • Then restore the ImagePath for both services
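The steps above can be sketched like this. The service names come from the post; renaming the ImagePath registry value to keep the services from restarting is my own assumption about the mechanics, so compare against the actual LTISysprep.wsf linked below.

```powershell
# Sketch of the workaround: stop the services, hide their ImagePath, sysprep, restore.
$services = 'tiledatamodelsvc', 'staterepository'
foreach ($name in $services) {
    Stop-Service -Name $name -Force
    (Get-Service -Name $name).WaitForStatus('Stopped', '00:01:00')  # ensure it *actually* stopped
    $key = "HKLM:\SYSTEM\CurrentControlSet\Services\$name"
    Rename-ItemProperty -Path $key -Name ImagePath -NewName ImagePath.bak
}

# ... run Sysprep normally here ...

foreach ($name in $services) {
    $key = "HKLM:\SYSTEM\CurrentControlSet\Services\$name"
    Rename-ItemProperty -Path $key -Name ImagePath.bak -NewName ImagePath
}
```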

Code:

I have updated my private LTISysprep.wsf script for MDT 2013 Update 1 (Preview) here:

http://mdtex.codeplex.com/SourceControl/latest#Templates.2013.u1/Distribution/Scripts/LTISysprep.wsf

One of the cool things about CodePlex is that you can compare it with the previous version to see what I changed. :^)

Note that this fix should only be temporary; it is my intention to delete this post in the near future when it’s fixed, hopefully before RTM. :^) Welcome to the fun of beta software.