A replacement for SCCM Add-CMDeviceCollectionDirectMembershipRule PowerShell cmdlet

TL;DR – The native Add-CMDeviceCollectionDirectMembershipRule PowerShell cmdlet sucks for adding more than 100 devices; use this replacement script instead.

How fast is good enough? When is the default too slow?

I guess most of us have been spoiled with modern machines: quad Xeon processors, a couple hundred GB of RAM, NVMe cache drives, and petabytes of storage at our command.

And don’t get me started on modern database indexing. You want to know what the average annual rainfall on the Spanish plain is? If I don’t get 2 million responses within half a second, I’ll be surprised, My Fair Lady.

But sometimes, as developers, we need to account for actual performance; we can’t just use the default process and expect it to scale in every scenario.


Been working on a ConfigMgr project in an environment with a machine count well over 300,000 devices. We were prototyping a project that involved creating Device Collections and adding computers to the collections using Direct Membership Rules.

Our design phase was complete when one of our engineers mentioned that Direct Memberships are generally not optimal at scale. We figured that during the lifecycle of our project we might need to add 5000 arbitrary devices to a collection. What would happen then?

My colleague pointed to this article: http://rzander.azurewebsites.net/collection-scenarios which discusses some of the pitfalls of Direct Memberships, but doesn’t go into the details of why, or what the optimal solution would be for our scenario.

I went to our NWSCUG meeting last week, and there was a knowledgeable Microsoft fella there, so I asked him during lunch. He mentioned that there are no ongoing performance problems with Direct Membership collections; however, there can be performance issues when creating or adding to the collection, especially within the Console (load the large collection into memory, add a single device, write it all back, whew!). He recommended, of course, running our own performance analysis to find out what works for us.

OK, so the hard way…

The Test environment

So off to my standard home SCCM test environment. I’m using the ever-efficient Microsoft 365 Powered Device Lab Kit. It’s a bit big, 50GB, but once downloaded I have a fully functional SCCM lab environment with a Domain Controller, an MDT server, and a SCCM server, all running within a virtual environment, within seconds!

My test box is an old Intel motherboard circa 2011, with an i7-3930K processor and 32GB of RAM, with all virtual machines running off an Intel 750 Series NVMe SSD!

First step was to create 5000 fake computers. That was fairly easy with a CSV file and the SCCM PowerShell cmdlet Import-CMComputerInformation. Done!
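
Something along these lines did the trick (a hedged sketch: the file path, MAC scheme, and collection name are invented):

# Generate 5000 fake machine records and import them into ConfigMgr
$List = 1..5000 | ForEach-Object {
    [pscustomobject]@{
        Name       = 'FAKE{0:D5}' -f $_
        MACAddress = '00:15:5D:00:{0:X2}:{1:X2}' -f [math]::Floor($_ / 256), ($_ % 256)
    }
}
$List | Export-Csv C:\Temp\FakeComputers.csv -NoTypeInformation

# Import-CMComputerInformation creates the machine objects
Import-Csv C:\Temp\FakeComputers.csv | ForEach-Object {
    Import-CMComputerInformation -ComputerName $_.Name -MacAddress $_.MACAddress `
        -CollectionName 'All Systems'
}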

Using the native ConfigMgr PowerShell cmdlets

OK, let’s write a script to create a new Direct Membership rule in ConfigMgr, and write some device objects to the collection.

Unfortunately, the native Add-CMDeviceCollectionDirectMembershipRule cmdlet doesn’t support adding devices from the pipeline, and won’t let us add more than one device at a time. Gee… I wonder if *that* will affect performance: query the collection, add a single device, and write it back to the server, for each device added. Hum….
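
For reference, the native pattern we were timing boils down to one cmdlet call, and one server round trip, per device (collection names here are hypothetical):

# The slow path: one Add-CMDeviceCollectionDirectMembershipRule call per device
Get-CMDevice -CollectionName 'All Systems' | Select-Object -First 50 | ForEach-Object {
    Add-CMDeviceCollectionDirectMembershipRule -CollectionName 'My Test Collection' `
        -ResourceId $_.ResourceID
}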

Well the performance numbers weren’t good:

Items to add    Number of seconds to add all items
           5                                   4.9
          50                                  53

As you can see, the number of seconds increased proportionally to the number of items added. If I wanted to add 5000 items, we’re talking about 5000 seconds, or almost an hour and a half. Um… no.

In fact, a bit of decompiling of the native function in CM suggests that it’s not really designed for scale; it’s best for adding only one device at a time.


The WMI way

I decided to see if we could write a functional replacement to the Add-CMDeviceCollectionDirectMembershipRule cmdlet that made WMI calls instead.

I copied some code from Kadio on http://cm12sdk.net (sorry, the site is down at the moment) and tried playing around with the function.

Turns out that the SMS_Collection WMI class has an AddMembershipRule() <singular> and an AddMembershipRules() <multiple> method. Hey, adding more than one device at a time sounds… better!

<Insert several hours of coding pain here>

And I finally got something that I think works pretty well:
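
The full script has more plumbing, but the core of it looks roughly like this (a simplified sketch with the error handling trimmed; the server name, site code, and IDs are invented):

# Batch up SMS_CollectionRuleDirect instances as they arrive from the
# pipeline, then commit them with ONE AddMembershipRules() call at the end.
function Add-CMDeviceDirectRule {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)][string] $SiteServer,
        [Parameter(Mandatory)][string] $SiteCode,
        [Parameter(Mandatory)][string] $CollectionID,
        [Parameter(Mandatory, ValueFromPipeline)] $Device
    )
    begin {
        $Namespace = "root\sms\site_$SiteCode"
        $RuleClass = [wmiclass]"\\$SiteServer\$($Namespace):SMS_CollectionRuleDirect"
        $Rules = @()
    }
    process {
        # Build one direct rule per incoming device (about half the total time)
        $Rule = $RuleClass.CreateInstance()
        $Rule.ResourceClassName = 'SMS_R_System'
        $Rule.ResourceID        = $Device.ResourceID
        $Rule.RuleName          = $Device.Name
        $Rules += $Rule
    }
    end {
        # One round trip to the server for the whole batch (the other half)
        $Collection = Get-WmiObject -ComputerName $SiteServer -Namespace $Namespace `
            -Class SMS_Collection -Filter "CollectionID = '$CollectionID'"
        $Collection.AddMembershipRules($Rules) | Out-Null
        $Collection.RequestRefresh() | Out-Null
    }
}

# Usage, piping straight from Get-CMDevice:
Get-CMDevice -CollectionName 'All Systems' |
    Add-CMDeviceDirectRule -SiteServer CM01 -SiteCode PS1 -CollectionID PS100042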

Performance numbers look much better:

Items to add    Number of seconds to add all items
           5                                   1.1
          50                                   1.62
         500                                   8.06
        5000                                  61.65

Takes about the same amount of time to add 5000 devices using my function as it takes to add 50 devices using the native CM cmdlet. Additionally, some testing suggests that about half of the time for each batch is spent creating the rules (the process {} block), and the remaining half in the call to AddMembershipRules(); my guess is that this should be even better in our production CM environment.

Note that this isn’t just a PowerShell function; it operates like a PowerShell cmdlet. The function will accept objects from the pipeline and process them as they arrive, as quickly as Get-CMDevice can feed them through the pipeline.

However, more testing continues.






New Tool – Disk Hogs

Edit: Heavily modified the script for speed. The bulk of the script now runs as compiled C# code.

Been resolving some problems at work lately with respect to full disks. One of our charters is to manage the ConfigMgr cache size on each machine, to ensure that the packages we need replicated actually get replicated out to the right machines at the right time.

But we’ve been getting some feedback about one 3rd-party SCCM caching tool failing in some scenarios. Was it really the 3rd-party tool failing, or some other factor?

Well we looked at the problem and found:

  • Machines with a modest 120GB SSD Drive (most machines have a more robust 250GB SSD)
  • Configuration Manager Application Install packages that are around 10-15GB (yowza!)
  • Users who leave too much… crap laying around their desktop.
  • And several other factors that have contributed to disks getting full.

Golly, when I try to install an application package that requires 12GB to install, and there is only 10GB free, it fails.

Um… yea…

I wanted to get some data from machines that are full: what is using up the disk space? But it’s a little painful searching around a disk for directories that are larger than they should be.


One of my favorite tools is “WinDirStat”, which produces a great graphical representation of a disk, allowing you to visualize which directories are taking up the most space, and which files are the largest. http://windirstat.net

Additionally I also like the “du.exe” tool from SysInternals.  https://live.sysinternals.com/du.exe

I wrap it up in a custom batch script file (dudir.cmd):

@rem %~dps0 expands to the drive and path of this script, so du.exe is found next to it
@%~dps0du.exe -l 1 -q -accepteula %*

and it produces output that looks like:

PS C:\Users> dudir
    263,122 C:\Users\Administrator
      1,541 C:\Users\Default
  7,473,508 C:\Users\keith
      4,173 C:\Users\Public
  7,742,345 C:\Users
Files: 27330
Directories: 5703
Size: 7,928,161,747 bytes
Size on disk: 7,913,269,465 bytes

Cool. However, I wanted something that I could run remotely, and that would give me just the most interesting directories, say everything over 1GB, or something configurable like that.

So a tool was born.


The script will enumerate all the files on a local machine and return the totals. Along the way we can add rules to “group” interesting directories and output the results.

So, say we want to know if there are any folders under “c:\program files (x86)\Adobe\*” that are larger than 1GB. For the most part, we don’t care about Adobe Reader, since it’s under 1GB, but everything else would be interesting. Stuff like that.

We have a default set of rules built into the script, but you can pass a new set of rules into the script using a *.csv file (I use Excel). A simplified sketch of the matching logic follows the table.

Folder                                  SizeMB
c:\*                                       500
C:\$Recycle.Bin                            100
c:\Program Files                             0
C:\Program Files\*                        1000
C:\Program Files (x86)                       0
C:\Program Files (x86)\Adobe\*            1000
C:\Program Files (x86)\*                  1000
C:\ProgramData\*                          1000
C:\ProgramData                               0
C:\Windows                                   0
C:\Windows\*                              1000
c:\users                                     0
C:\Users\*                                 100
C:\Users\*\*                               500
C:\Users\*\AppData\Local\Microsoft\*      1000
C:\Users\*\AppData\Local\*                 400
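
Conceptually, the matching works something like this (a much-simplified PowerShell sketch; the real tool does this in compiled C# for speed, and the CSV file name is invented):

# Each rule: a wildcard Folder pattern plus a minimum SizeMB to report
$Rules = Import-Csv C:\Tools\DiskHogRules.csv

Get-ChildItem C:\ -Directory -Recurse -ErrorAction SilentlyContinue | ForEach-Object {
    $MB = (Get-ChildItem $_.FullName -File -Recurse -ErrorAction SilentlyContinue |
           Measure-Object Length -Sum).Sum / 1MB
    foreach ($Rule in $Rules) {
        # Report any directory that matches a pattern and exceeds its threshold
        if ($_.FullName -like $Rule.Folder -and $MB -ge [int]$Rule.SizeMB) {
            [pscustomobject]@{ Folder = $_.FullName; SizeMB = [math]::Round($MB) }
            break   # first matching rule wins
        }
    }
}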

Example output:

The machine isn’t too interesting (it’s my home machine, not my work machine).

I’m still looking into tweaks and other things to modify in the rules to make the output more interesting.

  • Should I exclude \windows\System32 directories under X size?
  • etc…

If you have feedback, let me know.


Bypass OEM Setup and install your own image.


Really, Windows Autopilot is the future, as soon as the OEMs get their act together and offer machines without the bloatware and adware. Yea, I’m talking about you, Anti-Virus Trial! Go away, shoo! Shoo! Give me Signature Images, or I’ll do it myself.

Unfortunately, I’m currently working for a client that is “Cloud Averse”, and very… particular about security. “Have our machines go through the internet, and download our apps from a cloud? Oh heavens, no!!”

So all machines come from the OEMs into a centralized distribution center, where technicians run a hodge-podge of OS imaging tools to get the machines ready to ship out to each user.

And, No they don’t use any MDT… at least not yet…

Really it’s the Anti AutoPilot…

Where to start.

Well, when the machines arrive from the OEM, they are unboxed and placed on a configuration rack. If they are desktop machines, they are also connected to a KVM switch (imagine several 8-port switches daisy-chained together). Then they are plugged into power and network, and turned on.

Here’s our first challenge: how do we stop the PC from booting into the OEM’s OOBE process and get it into OUR process instead? Well, right now the technicians need to press the magic function key at just the right time during boot-up.

You know the drill: press F12 for Dell, or perhaps F9 for HP, or press Enter for Lenovo. Perhaps you have a Surface device, and need to hold down the Volume button while starting the machine. Yuck, but better than nothing…

Well, the feedback we got from the technicians is that sometimes they miss pressing the button… at “just” the right time. This is really a problem for desktop PCs connected to that KVM switch. If the monitor doesn’t sync to the new PC quickly enough, you might easily miss pressing the boot override key.

This sounded like a good challenge to start with.

Audit Mode

Really, IT departments don’t use Audit Mode. Audit Mode is a way to make customizations *during* Windows Setup and then re-seal the OS, so the end user gets the nice shiny Windows Setup process (Specialize and OOBE) that they expect on a new PC.

Deployments in IT are all about bypassing the shiny Windows OOBE experience. No, we don’t care about all the fancy new features in Cortana; we have already signed the SA agreement with Microsoft, we already know the domain to connect to, and our company has only one locale and keyboard type. IT departments would much rather skip all that and get the user to their machine. So the thought of re-sealing a machine and going *back* to OOBE, when we just finished joining the domain and installing apps, is silly.

But there are some possibilities here. Turns out that when Windows Setup is running, it will look for an Unattend.xml file and try to use it.

Methods for running Windows Setup

MDT uses an Unattend.xml file on the local machine so it can skip over the settings we already know about, and re-launch MDT LiteTouch when finished. What about this process? If we place an Unattend.xml file on the root of a removable USB drive, the Windows version on the hard disk will look there and use those settings. The lab techs appeared to have a lot of USB sticks laying around, so using them shouldn’t be a problem.

We can’t use an MDT Unattend.xml file as-is, but we can use Audit Mode to get to a command prompt and launch our own MDT LiteTouchPE_x64.wim file.

  1. Boot into Audit Mode.
  2. While in Audit Mode, auto login using the Administrator Account.
  3. Find our PowerShell script and run it!
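
A minimal sketch of what that Unattend.xml on the USB stick might look like (untested as written; the D:\ script path is hypothetical, and in Audit Mode the built-in Administrator auto-logs on by itself):

<?xml version="1.0" encoding="utf-8"?>
<!-- Minimal sketch: boots Setup into Audit Mode, then runs our script -->
<unattend xmlns="urn:schemas-microsoft-com:unattend"
          xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
  <settings pass="oobeSystem">
    <component name="Microsoft-Windows-Deployment" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <!-- Send Setup into Audit Mode instead of OOBE -->
      <Reseal>
        <Mode>Audit</Mode>
      </Reseal>
    </component>
  </settings>
  <settings pass="auditUser">
    <component name="Microsoft-Windows-Deployment" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <RunSynchronous>
        <RunSynchronousCommand wcm:action="add">
          <Order>1</Order>
          <Description>Run our bootstrap script from the USB stick</Description>
          <Path>powershell.exe -NoProfile -ExecutionPolicy Bypass -File D:\OEMBypass.ps1</Path>
        </RunSynchronousCommand>
      </RunSynchronous>
    </component>
  </settings>
</unattend>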

PowerShell script

Once we are in PowerShell, we have full access to the system and can modify it in any way we choose. In this case, I have copied a LiteTouchPE_x64.wim file to the USB stick, and we can force the hard drive to boot from that instead, continuing our process in MDT LiteTouch. Yea!
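
The heart of that script looks roughly like this (a hedged sketch, not the production script; it assumes the WIM and a boot.sdi are on the root of the stick, and shows the UEFI loader path, winload.exe for BIOS machines):

# Find the USB stick with our LiteTouch WinPE image on it
$Drive = Get-Volume |
    Where-Object { $_.DriveLetter -and (Test-Path "$($_.DriveLetter):\LiteTouchPE_x64.wim") } |
    Select-Object -First 1
$Root = "$($Drive.DriveLetter):"

# Stage the boot files on the local disk
New-Item C:\Boot.WIM -ItemType Directory -Force | Out-Null
Copy-Item "$Root\LiteTouchPE_x64.wim", "$Root\boot.sdi" C:\Boot.WIM

# Describe the ramdisk, then create a boot entry that points at the WIM
bcdedit /create '{ramdiskoptions}' /d 'Ramdisk Options'
bcdedit /set '{ramdiskoptions}' ramdisksdidevice partition=C:
bcdedit /set '{ramdiskoptions}' ramdisksdipath \Boot.WIM\boot.sdi

$Entry = bcdedit /create /d 'MDT LiteTouch' /application OSLOADER
$Guid  = [regex]::Match(($Entry | Out-String), '\{[0-9a-fA-F-]+\}').Value
bcdedit /set $Guid device   "ramdisk=[C:]\Boot.WIM\LiteTouchPE_x64.wim,{ramdiskoptions}"
bcdedit /set $Guid osdevice "ramdisk=[C:]\Boot.WIM\LiteTouchPE_x64.wim,{ramdiskoptions}"
bcdedit /set $Guid path \windows\system32\boot\winload.efi
bcdedit /set $Guid systemroot \windows
bcdedit /set $Guid detecthal on
bcdedit /set $Guid winpe on

# Boot the WIM once, then restart
bcdedit /bootsequence $Guid
Restart-Computer -Force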

Now we have a bridge between the OEM system and our LiteTouch, or any other automated WinPE disk.

Yea! Now for the *REAL* automation to begin… 🙂



ZTISelectBootDisk.wsf new with BusType

Several years ago I wrote a script to help select which disk to deploy Windows to during your MDT LiteTouch or ZeroTouch task sequence.


Well, based on a request from my latest client, I have created a similar script that supports BusType.


My client is trying to install Windows Server 2016 on a server with a SAN. When the machine boots into WinPE, one of the SAN drives appears *first* as Disk 0 (zero), and by default MDT task sequences will deploy to disk zero! My ZTISelectBootDisk.wsf already shows how to override that; all we need is a way to tell MDT which disk to choose, via the correct WMI query.

Turns out it was harder than I thought.

What we wanted was the BusType that appears in the “Type” field when you type “Select Disk X” and then “detail disk” in Diskpart.exe. When we ran “detail disk” in Diskpart.exe we could see the bus type: Fibre, as compared to regular disks like SCSI or SAS.

The challenge was that the regular Win32_DiskDrive WMI query wasn’t returning the BusType value, and we couldn’t figure out how to get that data through other queries.

I tried running some PowerShell queries like ‘Get-Disk’ and noticed that the output type was MSFT_Disk, from a weird WMI namespace: root\microsoft\windows\storage. And adding that query to the script works! Yea!!!
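
You can see where Get-Disk gets its data from the type name on its output:

# The CIM type name reveals the namespace behind Get-Disk:
(Get-Disk | Get-Member | Select-Object -First 1).TypeName
# -> Microsoft.Management.Infrastructure.CimInstance#ROOT/Microsoft/Windows/Storage/MSFT_Disk

# ...and BusType is right there on the object:
Get-Disk | Select-Object Number, BusType, Size, Model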


What kind of BusTypes are there?

Name                 Value  Meaning
Unknown                  0  The bus type is unknown.
1394                     4  IEEE 1394
Fibre Channel            6  Fibre Channel
iSCSI                    9  iSCSI
SAS                     10  Serial Attached SCSI (SAS)
SATA                    11  Serial ATA (SATA)
SD                      12  Secure Digital (SD)
MMC                     13  Multimedia Card (MMC)
Virtual                 14  This value is reserved for system use.
File Backed Virtual     15  File-Backed Virtual
Storage Spaces          16  Storage Spaces
NVMe                    17  NVMe

For this script we are *excluding* the following bus types:

Name                 Value  Meaning
Fibre Channel            6  Fibre Channel
iSCSI                    9  iSCSI
Storage Spaces          16  Storage Spaces
NVMe                    17  NVMe

Meaning that the *FIRST* fixed device not in this list will become the new *target* OS disk. Run this query on your machine to see which disk will become the target:

gwmi -Namespace root\microsoft\windows\storage -Query `
    'SELECT Number, Size, BusType, Model FROM MSFT_Disk
     WHERE BusType <> 6 AND BusType <> 9 AND BusType <> 16 AND BusType <> 17' |
    Select-Object -First 1
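
Inside a task sequence, the Number of that first surviving disk is what gets handed back to MDT. In PowerShell form, the idea looks roughly like this (a sketch; the real ZTISelectBootDisk.wsf does the equivalent in script, and OSDDiskIndex is the standard MDT variable):

# Pick the first fixed disk that isn't Fibre/iSCSI/Storage Spaces/NVMe...
$Disk = Get-WmiObject -Namespace root\microsoft\windows\storage -Query '
    SELECT Number, Size, BusType, Model FROM MSFT_Disk
    WHERE BusType <> 6 AND BusType <> 9 AND BusType <> 16 AND BusType <> 17' |
    Sort-Object Number | Select-Object -First 1

# ...and hand it to the task sequence as the target disk
$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$TSEnv.Value('OSDDiskIndex') = [string]$Disk.Number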


Reminder that this script requires MDT (latest), and the script should be placed in the %DeploymentShare%\Scripts folder. Additionally, you should install all the storage packages for WinPE; sorry, I don’t recall *which* packages I selected when I did my testing.





New script – Import Machine Objects from Hyper-V into ConfigMgr

Quick post; been doing a lot of ConfigMgr OSD deployments lately, with a lot of Hyper-V test hosts.

For my test hosts, I’ve been creating machine objects in ConfigMgr by manually entering them one at a time (yuck). So I was wondering what the process is for entering machine objects via PowerShell.

Additionally, I was curious how to inject variables into the machine object that could be used later on in the deployment process, in this case a Role.
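
The resulting script is roughly shaped like this (a hedged sketch: the VM name filter, collection name, site code, and Role value are all invented, and the import may take a moment to land before the variable call succeeds):

# Pull MACs from Hyper-V, create the machine objects, then attach a
# per-device variable the task sequence can read later.
Import-Module Hyper-V
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Push-Location 'PS1:'   # ConfigMgr site drive (site code PS1 here)

foreach ($VM in Get-VM -Name 'TEST-*') {
    # Hyper-V reports the MAC without separators; ConfigMgr wants colons
    $Mac = ($VM | Get-VMNetworkAdapter | Select-Object -First 1).MacAddress
    $Mac = ($Mac -split '(.{2})' -ne '') -join ':'

    Import-CMComputerInformation -ComputerName $VM.Name -MacAddress $Mac `
        -CollectionName 'OSD Test Hosts'
    New-CMDeviceVariable -DeviceName $VM.Name -VariableName 'Role' `
        -VariableValue 'WebServer'
}
Pop-Location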

Up next, how to extract this information from VMWare <meh>.

Decrypting the HP BIOSConfigUtility64.exe password

First off, for those of you who asked: I did leave 1E last week, off to try new things. Thought I’d take some time off during May, and then look for new opportunities. The future is wide open :^)

In the meantime, I did come across this post today about decrypting passwords with the BIOSConfigUtility64.exe utility.


And it reminded me of some work I did recently at 1E on their BIOS 2 UEFI toolset. The HP tool did set off some red flags for me: the tool was able to take a plaintext password and put the encrypted value in a *.bin file without asking for an encryption key!?!? How did it do that? I figured it was because the tool stored the encryption key for the password within the executable itself. That’s not optimal (it’s insecure), but not uncommon.

So I spent some time trying to reverse engineer the process. “link -dump /imports” revealed several cryptographic calls like CryptImportKey(), and with some time in trusty WinDBG.exe I was able to locate the key. I wondered if HP would try to hide the key, but it was there in plain sight. I wrote a test program, but in my case I didn’t write a tool to decrypt the password; I wrote a tool to perform the encryption. My thought was to provide a centralized password-storage mechanism for HP, Dell, and Lenovo machines, and I wondered if the HP toolset was a good starting point. 1E didn’t go down that route, but reverse engineering the HP tool was interesting.

It does bring up an interesting ethical dilemma for engineers like myself, something you see in the news whenever someone discovers a new vulnerability in an OS or web browser: do you keep the vulnerability secret, or do you tell the public, knowing that some black hat could use it to write an exploit and expose sensitive data? In my case, I decided not to reveal the exploit; Duarte chose to reveal it. Who is right?

I guess the only thing I can say is that this is a good learning opportunity to discuss the security and encryption of secrets like passwords. If you are an IT administrator, be wary of tools that can encrypt data magically without an encryption key. It’s not magic; they are encrypting the data with *something*, and it’s possible the key is stored locally in an insecure fashion. Better to store the passwords in a more secure location, with stronger encryption, and then convert to the HP password format at the “Last-Minute” when performing the actual task sequence.

This is a similar problem with other tools that *appear* to encrypt data, but store it in a manner that *can* be easily extracted, like the AdministratorPassword field in the Unattend.xml file, and the password in the Sysinternals Autologon.exe tool. Both tools store passwords in an easily recoverable fashion. I’ve seen too many administrators fall for the illusion that these are secure; they are not.
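
For example, the “hidden” AdministratorPassword value in an Unattend.xml is just Base64 with a well-known suffix appended; anyone can reverse it in a couple of lines:

# The "encrypted" AdministratorPassword is only Base64 of the password
# with the literal string "AdministratorPassword" appended:
$Bytes   = [Text.Encoding]::Unicode.GetBytes('P@ssw0rd' + 'AdministratorPassword')
$Encoded = [Convert]::ToBase64String($Bytes)
$Encoded

# ...and anyone can reverse it just as easily:
[Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($Encoded))
# -> P@ssw0rdAdministratorPassword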


Hope to see many of you at MMS MOA in May 2017!!! I’ll be there.

Formatting a removable USB drive with 2 partitions

TL;DR – Starting with Windows 10 Insider Preview Build 14965, you can format any “Removable” USB Flash Drive with more than one partition. Perfect for installation of large (over 4GB) WIM files on UEFI machines!


Hey all, back from a week at the Microsoft MVP Summit, a week in the UK, and a week in Arizona.

A few weeks ago at the Microsoft MVP Summit, an engineering manager with the Windows product group made an offhand comment about formatting a removable USB drive with two partitions. This took several of us by surprise, because historically this hasn’t been widely supported without converting the drive to a Fixed disk or some similar trick.

Mike Terrill (and Mike Niehaus) already beat me to the punch with some posts, but I wanted to share my results. :^)

The Background

Why is this important? Well, as I mentioned in another blog post, more and more people are booting UEFI machines from USB flash drives formatted FAT32, with WIM images over 4GB in size, and that causes a problem: FAT32 can’t hold files over 4GB.

Another solution would be to use the Rufus tool to split a USB drive into multiple partitions with a hidden FAT32 partition. The problem there is that the hidden partition uses a special UEFI app that is not signed, so it won’t work on UEFI machines with Secure Boot enabled.

This has become even more interesting since Windows Server 2016 came out, with a base WIM image for standard Server SKU that is over 4GB in size. Hum…

The Hardware


I tested several different USB makes using my Windows 10 (version 1607) laptop. Some would allow me to create a 2nd partition on a removable flash drive; others would not, giving me an error:

DISKPART> create part pri

No usable free extent could be found. It may be that there is insufficient 
free space to create a partition at the specified size and offset. Specify
different size and offset values or don't specify either to create the maximum 
sized partition. It may be that the disk is partitioned using the MBR disk
partitioning format and the disk contains either 4 primary partitions, (no
more partitions may be created), or 3 primary partitions and one extended
partition, (only logical drives may be created).

Mostly the older and/or cheaper drives didn’t work, while most of the newer and/or name-brand drives did work.

Finally I narrowed it down to two different models, both my favorites:

Then I tested against three operating systems: Windows 10 version 1607, Windows 10 Preview, and Windows 7 SP1, all using Diskpart to create multiple partitions.

The script

Feed this into Diskpart.exe:

rem disk 1 is the USB stick here; double-check with "list disk" first
sel disk 1
rem small FAT32 partition for the UEFI boot files
create part pri size=450
format quick fs=fat32
rem the rest of the stick as NTFS, which can hold WIMs over 4GB
create part pri
format quick fs=ntfs

The Results:

                                 SanDisk           Transcend
Windows 7 SP1 Build 7601           Pass               Fail
Windows 10  Version 1607           Pass               Fail
Windows 10 Preview 14965           Pass               Pass   

I was able to format my SanDisk into multiple partitions using Windows 7 and beyond.

But I was not able to format the Transcend drive into multiple partitions using Windows 7 or Windows 10 version 1607; I was, however, able to create multiple partitions on the new Windows 10 Insider Preview 14965.

That’s new!

I haven’t done enough testing using these removable flash drives on older machines to see if the partitions are still visible there, but the results look promising for a start.

Update #1 – 11/28/16:

Found out today that the reason my SanDisk Extreme worked on Windows 7 and Windows 10 1607 may be that the drive reports itself to the OS as “Fixed” rather than “Removable”. Link.
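
You can check how a drive reports itself with a quick WMI query (“Removable Media” vs. “Fixed hard disk media” is the property to look at; exact strings vary by driver):

# How does Windows classify each disk?
Get-WmiObject Win32_DiskDrive | Select-Object Model, MediaType, InterfaceType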

Update #2 – 11/28/16:

I noticed that when taking the “removable” disk formatted with 2 partitions from Windows 10 Preview 14965 over to Windows 10 version 1607, only the first partition was visible. As a workaround, I tried creating the main NTFS partition first and the FAT32 partition second:

rem disk 1 is the USB stick; NTFS partition first this time
sel disk 1
create part pri
shrink desired=450
format quick fs=ntfs
rem then the FAT32 boot partition in the 450MB we left free
create part pri
format quick fs=fat32