Dell XPS 13 9360 Hardware Reset

TL;DR – If your laptop has developed spontaneous errors at startup, try disconnecting the batteries for an hour, then try again.

XPS 13 9360

Got a new laptop last month, it was time to replace the old one. Did some searching online and found something light, powerful, and at a good price. Dell XPS 13 9360:

  • 8th Gen Intel® Core i7-8550U (*Quad Core*)
  • 512GB PCIe Solid State Drive (*NVMe*)
  • 16GB LPDDR3 1866MHz RAM
  • 1x Thunderbolt port
  • 13.3″ Touchscreen InfinityEdge QHD+ (3200 x 1800) Display


On sale at Costco for $1400. Overall a good value for a quad core laptop with NVMe.

The Break

Came back from a meeting (Starbucks? :)) on Friday and the machine failed to boot. Got some display errors, rebooted, and landed on the recovery screen. So I shut the machine down for a while; when I rebooted, nothing. No screen, nothing.

However, I did notice that the LED on the front was blinking, and I was able to catch the pattern: 2 and 7. Looking it up in the service manual:


LCD Error!?!?! Crap.

A call to Dell Support confirmed the error, and an RMA ticket was generated; it could be two weeks before I get the machine back.


I wanted to archive the contents of the disk before I sent it off to Dell, so I got out my Torx screwdriver.

While I had the case open, I also disconnected the main battery and the CMOS battery.

In most modern PCs, each of the components has a small computer built into it. If they develop errors, do they reset like the main OS when the power is off? If the battery is always connected, that might not be true. I had a similar problem recently with my SuperMicro test box, where flashing the BIOS wasn't resolving a stubborn issue. Draining the CMOS battery and re-flashing the BIOS did work!

After an hour, I plugged in the batteries, and tried booting again. Yea, the machine works! It’s alive! I don’t have to send my machine in for repair.

Hopefully the machine will work a little bit longer than 45 days. We’ll know soon.



A replacement for SCCM Add-CMDeviceCollectionDirectMembershipRule PowerShell cmdlet

TL;DR – The native Add-CMDeviceCollectionDirectMembershipRule PowerShell cmdlet sucks for adding more than 100 devices, use this replacement script instead.

How fast is good enough? When is the default too slow?

I guess most of us have been spoiled with modern machines: quad Xeon processors, a couple hundred GB of RAM, NVMe cache drives, and petabytes of storage at our command.

And don’t get me started on modern database indexing. You want to know what the average annual rainfall on the Spanish Plains is? If I don’t get 2 million responses within half a second, I’ll be surprised, My Fair Lady.

But sometimes, as developers, we need to account for actual performance; we can’t just use the default process and expect it to scale to all scenarios.


I’ve been working on a ConfigMgr project in an environment with well over 300,000 devices. We were prototyping a project that involved creating Device Collections and adding computers to those Collections using Direct Membership Rules.

Our design phase was complete, when one of our engineers mentioned that Direct Memberships are generally not optimal at scale. We figured that during the lifecycle of our project we might need to add 5000 arbitrary devices to a collection. What would happen then?

My colleague pointed to an article which discussed some of the pitfalls of Direct Memberships, but it didn’t go into the details of why, or into what the optimal solution would be for our scenario.

I went to our NWSCUG meeting last week, and there was a knowledgeable Microsoft fella there, so I asked him during lunch. He mentioned that there are no ongoing performance problems with Direct Membership collections; however, there can be performance issues when creating or adding to a collection, especially within the Console (load the large collection into memory, then add a single device, whew!). He recommended, of course, running our own performance analysis to find out what worked for us.

OK, so the hard way…

The Test environment

So off to my standard home SCCM test environment. I’m using the ever-efficient Microsoft 365 Powered Device Lab Kit. It’s a bit big, 50GB, but once downloaded, I have a fully functional SCCM lab environment with a Domain Controller, an MDT server, and an SCCM Server, all running within a virtual environment, within seconds!

My test box is an old Intel motherboard circa 2011, with an i7-3930K processor, 32GB of RAM, and all Virtual Machines running off an Intel 750 Series NVMe SSD!

First step was to create 5000 Fake computers. That was fairly easy with a CSV file and the SCCM PowerShell cmdlet Import-CMComputerInformation.  Done!
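Generating those fake records is mostly a matter of building a CSV and feeding it to the cmdlet. A minimal sketch of how I'd do it (the collection name, CSV layout, and MAC scheme here are my own illustration, not the exact file I used):

```powershell
# Build a CSV of 5000 fake machines, then import them into ConfigMgr.
# Assumes the ConfigurationManager module is loaded and the current
# location is the site drive (e.g. PS1:\).
$csv = "$env:TEMP\FakeComputers.csv"

1..5000 | ForEach-Object {
    $hi = [math]::Floor($_ / 256)
    $lo = $_ % 256
    [PSCustomObject]@{
        Name       = 'FAKE{0:D5}' -f $_
        MACAddress = '00:11:22:33:{0:X2}:{1:X2}' -f $hi, $lo
    }
} | Export-Csv -Path $csv -NoTypeInformation

# Import the file; column names map to the -VariableName list.
Import-CMComputerInformation -CollectionName 'All Systems' `
    -FileName $csv -VariableName 'Name', 'MACAddress' -EnableColumnHeading $true
```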

Using the native ConfigMgr PowerShell cmdlets

OK, let’s write a script to create a new Direct Membership rule in ConfigMgr, and write some Device Objects to the Collection.

Unfortunately the native Add-CMDeviceCollectionDirectMembershipRule cmdlet doesn’t support adding devices via the pipeline, and won’t let us add more than one device at a time. Gee… I wonder if *that* will affect performance. Query the collection, add a single device, and write back to the server, for each device added. Hum….
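For the record, the one-at-a-time loop I tested looked roughly like this (the collection name and device name filter are hypothetical):

```powershell
# Naive approach: one cmdlet call per device. Each call re-reads and
# re-writes the collection, so runtime grows linearly with device count.
$collection = Get-CMDeviceCollection -Name 'My Test Collection'

Get-CMDevice -Name 'FAKE*' | ForEach-Object {
    Add-CMDeviceCollectionDirectMembershipRule `
        -CollectionId $collection.CollectionID `
        -ResourceId   $_.ResourceID
}
```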

Well the performance numbers weren’t good:

Items to add    Seconds to add all items
           5    4.9
          50    53

As you can see, the number of seconds increased proportionally with the number of items added. If I wanted to add 5000 items, we’re talking about 5000 seconds, or nearly an hour and a half. Um… no.

In fact, a bit of decompiling of the native function in CM suggests that it’s not really designed for scale; it’s best for adding only one device at a time.


The WMI way

I decided to see if we could write a functional replacement to the Add-CMDeviceCollectionDirectMembershipRule cmdlet that made WMI calls instead.

I copied some code from Kadio on (sorry the site is down at the moment), and tried playing around with the function.

Turns out that the SMS_Collection WMI class has an AddMembershipRule() <singular> and an AddMembershipRules() <plural> method. Hey, adding more than one device at a time sounds… better!

<Insert several hours of coding pain here>

And finally got something that I think works pretty well:
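The heart of the replacement looks something like this (a simplified sketch of the batching pattern; the site code, server name, and function name are placeholders, and the real script has more parameter validation and error handling):

```powershell
function Add-ResourceToCollection {
    [CmdletBinding()]
    param(
        [string] $SiteCode   = 'CHQ',
        [string] $SiteServer = 'localhost',
        [Parameter(Mandatory = $true)]
        [string] $CollectionID,
        [Parameter(ValueFromPipeline = $true)]
        $Resource
    )
    begin {
        # Fetch the target collection once.
        $collection = Get-WmiObject -ComputerName $SiteServer `
            -Namespace "root\SMS\Site_$SiteCode" `
            -Class SMS_Collection -Filter "CollectionID = '$CollectionID'"
        $rules = @()
    }
    process {
        # Build one SMS_CollectionRuleDirect instance per piped-in device.
        $rule = ([WmiClass]"\\$SiteServer\root\SMS\Site_$SiteCode`:SMS_CollectionRuleDirect").CreateInstance()
        $rule.ResourceClassName = 'SMS_R_System'
        $rule.ResourceID        = $Resource.ResourceID
        $rule.RuleName          = $Resource.Name
        $rules += $rule
    }
    end {
        # One round trip to the site server for the whole batch.
        $collection.AddMembershipRules($rules) | Out-Null
        $collection.RequestRefresh() | Out-Null
    }
}

# Usage: pipe devices straight in.
Get-CMDevice -Name 'FAKE*' | Add-ResourceToCollection -CollectionID 'CHQ00042'
```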

Performance numbers look much better:

Items to add    Seconds to add all items
           5    1.1
          50    1.62
         500    8.06
        5000    61.65

It takes about the same amount of time to add 5000 devices using my function as it takes to add 50 devices using the native CM cmdlet. Additionally, some testing suggests that about half of the time for each batch is spent creating the rules (the process {} block), and the remaining half in the call to AddMembershipRules(); my guess is that this should hold up well in our production CM environment.

Note that this isn’t just a PowerShell function; it operates like a PowerShell cmdlet. It will accept objects from the pipeline and process them as they arrive, as quickly as Get-CMDevice can feed them through.

More testing continues, however.





New Tool – Disk Hogs

Edit: Heavily modified script for speed. Bulk of script is now running Compiled C# Code.

Been resolving some problems at work lately with respect to full disks. One of our charters is to manage the ConfigMgr cache sizes on each machine, to ensure that the packages we need replicated actually get replicated out to the right machines at the right time.

But we’ve been getting some feedback about one 3rd party SCCM caching tool failing in some scenarios. Was it really the 3rd party tool failing, or some other factor?

Well we looked at the problem and found:

  • Machines with a modest 120GB SSD Drive (most machines have a more robust 250GB SSD)
  • Configuration Manager Application Install packages that are around 10-15GB (yowza!)
  • Users who leave too much… crap laying around their desktop.
  • And several other factors that have contributed to disks getting full.

Golly, when I try to install an application package that requires 12GB to install, and there is only 10GB free, it fails.

Um… yea…

I wanted to get some data for machines that are full: What is using up the disk space? But it’s a little painful searching around a disk for directories that are larger than they should be.


One of my favorite tools is “WinDirStat” which produces a great graphical representation of a disk, allowing you to visualize what directories are taking up the most space, and which files are the largest.

Additionally I also like the “du.exe” tool from SysInternals.

I wrap it up in a custom batch script file

@%~dps0du.exe -l 1 -q -accepteula %*

and it produces output that looks like:

PS C:\Users> dudir
    263,122 C:\Users\Administrator
      1,541 C:\Users\Default
  7,473,508 C:\Users\keith
      4,173 C:\Users\Public
  7,742,345 C:\Users
Files: 27330
Directories: 5703
Size: 7,928,161,747 bytes
Size on disk: 7,913,269,465 bytes

Cool, however, I wanted something that I could run remotely, and that would give me just the most interesting directories, say everything over 1GB, or something configurable like that.

So a tool was born.


The script will enumerate through all files on a local machine and return the totals. Along the way we can add in rules to “Group” interesting directories and output the results.

So, say we want to know if there are any folders under “c:\program files (x86)\Adobe\*” that are larger than 1GB. For the most part, we don’t care about Adobe Reader, since it’s under 1GB, but everything else would be interesting. Stuff like that.

We have a default set of rules built into the script, but you can pass a new set of rules into the script using a *.csv file (I use Excel):

Folder SizeMB
c:\* 500
C:\$Recycle.Bin 100
c:\Program Files 0
C:\Program Files\* 1000
C:\Program Files (x86) 0
C:\Program Files (x86)\Adobe\* 1000
C:\Program Files (x86)\* 1000
C:\ProgramData\* 1000
C:\ProgramData 0
C:\Windows 0
C:\Windows\* 1000
c:\users 0
C:\Users\* 100
C:\Users\*\* 500
C:\Users\*\AppData\Local\Microsoft\* 1000
C:\Users\*\AppData\Local\* 400
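My reading of those rules: a folder is matched against the Folder patterns by simple wildcard comparison; SizeMB of 0 means “always report the total,” and anything else means “only report if the folder is over that size.” Roughly (a simplified sketch; the real script does the heavy lifting in compiled C# for speed, and the CSV filename is hypothetical):

```powershell
# Given a folder path and its measured size, decide whether to report it.
function Test-FolderRule {
    param(
        [string] $Path,
        [double] $SizeMB,
        [array]  $Rules   # objects with Folder and SizeMB properties
    )
    foreach ($rule in $Rules) {
        if ($Path -like $rule.Folder) {
            # SizeMB = 0 -> always report; otherwise report only when over the limit.
            return ($rule.SizeMB -eq 0) -or ($SizeMB -ge $rule.SizeMB)
        }
    }
    return $false   # no rule matched; not interesting
}

# Load the rules table shown above from CSV (Folder,SizeMB columns).
$rules = Import-Csv '.\DiskHogRules.csv'
```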

Example output:

The machine isn’t too interesting (it’s my home machine not work machine)

I’m still looking into tweaks and other things to modify in the rules to make the output more interesting.

  • Should I exclude \windows\System32 directories under X size?
  • etc…

If you have feedback, let me know


Silence is Golden during Setup

Thanks to @gwblok for pointing me to this twitter thread about Windows OOBE Setup.

When Unattended is not Silent

During Windows 10 OOBE, the Windows Welcome process uses the Cortana voice engine to speak during Windows Setup.

Now we can go look for any updates

Shut up!

Yes, I’m one of those guys who sets my Sound Profile to “silent”, Silence is Golden!

And if I’m going to be running several Windows Deployments in my lab (read my home office), then I would prefer the machine to be silent. Reminds me of the XP/Vista days when we had boot up sounds. How rude.

So how to disable… Well, the answer doesn’t appear to be that straightforward.


At first I suggested SkipMachineOOBE, and it works on my test machine! Yea!

Then I got a reminder that SkipMachineOOBE is deprecated according to documentation.


Thanks to @Jarwidmark for pointing me in the thread above to:

reg.exe add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v DisableVoice /t REG_DWORD /d 1

However, Microsoft Documentation also states that you should only use this for testing, and that Cortana Voice should be re-enabled for users. OK… Fine, we’ll delete the key after setup is complete.

So where to place all this stuff?


Several people suggested modifying the local registry within the imaging process, but I would prefer to avoid that, instead trying to see if we can perform the action during Setup using our unattend.xml file.

The command to disable would need to run *before* OOBE — sounds like the perfect job for the “Specialize” pass.

Some quick testing, verified, and we are ready to go.

Automating OOBE

So, given the guidance from Microsoft on how to automate Windows 10:

Here are my changes:

  • We disable Cortana during the Specialize Pass before OOBE.
  • Then during OOBE, we clear the Cortana setting, and continue.
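Put together, the relevant unattend.xml fragments look something like this (a sketch — the component architecture/publicKeyToken attributes are trimmed for brevity, so treat this as an outline rather than a drop-in file):

```xml
<!-- specialize pass: silence Cortana before OOBE starts -->
<settings pass="specialize">
  <component name="Microsoft-Windows-Deployment">
    <RunSynchronous>
      <RunSynchronousCommand wcm:action="add">
        <Order>1</Order>
        <Path>reg.exe add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v DisableVoice /t REG_DWORD /d 1</Path>
      </RunSynchronousCommand>
    </RunSynchronous>
  </component>
</settings>

<!-- oobeSystem pass: clean up, so Cortana voice is back for the end user -->
<settings pass="oobeSystem">
  <component name="Microsoft-Windows-Shell-Setup">
    <FirstLogonCommands>
      <SynchronousCommand wcm:action="add">
        <Order>1</Order>
        <CommandLine>reg.exe delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v DisableVoice /f</CommandLine>
      </SynchronousCommand>
    </FirstLogonCommands>
  </component>
</settings>
```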


Bypass OEM Setup and install your own image.


Really, Windows Autopilot is the future. As soon as the OEMs get their act together, and offer machines without the bloatware and adware. Yea, I’m talking about you, Anti-Virus Trial! Go away, shoo! Shoo! Give me Signature Images, or I’ll do it myself.

Unfortunately, I’m currently working for a client that is “Cloud Averse”, and very… particular about Security. “Have our machines go through the internet, and download our apps from a cloud? Oh heavens no!!”

So all machines come from the OEMs into a centralized distribution center, where they run a hodge-podge of OS imaging tools to get the machines ready to ship out to each user.

And, No they don’t use any MDT… at least not yet…

Really it’s the Anti AutoPilot…

Where to start.

Well, when the machines arrive from the OEM, they are unboxed and placed on a configuration rack. If they are Desktop Machines, they are also connected to a KVM switch (Imagine several 8-port switches daisy chained together). Then they are plugged into power, network, and turned on.

Here’s our first challenge: How do we stop the PC from booting into the OEM’s OOBE process and get it into OUR process instead? Well, right now the technicians need to press the magic function key at just the right time during boot up.

You know the drill, Press F12 for Dell, or perhaps press F9 for HP, or Press enter for Lenovo. Perhaps you have a Surface Device, and need to hold down the Volume button while starting the machine. Yuck, but better than nothing…

Well, the feedback we got from the technicians is that sometimes they miss pressing the button… at “just” the right time. This is really a problem for Desktop PCs connected to that KVM switch. If the monitor doesn’t sync to the new PC quickly enough, you might easily miss pressing the boot override key.

This sounded like a good challenge to start with.

Audit Mode

Really, IT departments don’t use Audit Mode. Audit Mode is a way to make customizations *during* Windows Setup and then re-seal the OS, so the end-user gets the nice shiny Windows Setup process (Specialize and OOBE) that they expect in a new PC.

Deployments in IT are all about bypassing the shiny Windows OOBE experience. No we don’t care about all the fancy new features in Cortana, We have already signed the SA agreement with Microsoft, we already know the domain to connect to, and our company has only one locale and keyboard type. IT departments would much rather skip all that, and get the user to their machine. So the thought of re-sealing a machine and going *back* to OOBE when we just finished joining to the domain and installing apps is silly.

But there are some possibilities here. Turns out that when Windows Setup is running, it will look for an Unattend.xml file and try to use it.

Methods for running Windows Setup

MDT uses an Unattend.xml file on the local machine so it can skip over the settings we know about, and re-launch MDT LiteTouch when finished. What about this process? If we place an Unattend.xml file on the root of a removable USB drive, the Windows version on the hard disk will find it and use those settings. The lab techs appeared to have a lot of USB sticks lying around, so using them shouldn’t be a problem.

We can’t use a MDT unattend.xml file as-is, but we can use AuditMode to get to a command prompt and install our own MDT LitetouchPE_x64.wim file.

  1. Boot into Audit Mode.
  2. While in Audit Mode, auto login using the Administrator Account.
  3. Find our PowerShell script and run it!
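Those three steps map onto the unattend.xml passes something like this (a sketch — component attributes are trimmed, the USB drive letter varies in practice, and the script name Start-LiteTouch.ps1 is a placeholder of mine):

```xml
<!-- oobeSystem pass: instead of finishing OOBE, reseal into Audit Mode.
     Audit Mode auto-logs-in as the built-in Administrator. -->
<settings pass="oobeSystem">
  <component name="Microsoft-Windows-Deployment">
    <Reseal>
      <Mode>Audit</Mode>
    </Reseal>
  </component>
</settings>

<!-- auditUser pass: once logged in, run our script from the USB stick -->
<settings pass="auditUser">
  <component name="Microsoft-Windows-Deployment">
    <RunSynchronous>
      <RunSynchronousCommand wcm:action="add">
        <Order>1</Order>
        <Path>powershell.exe -ExecutionPolicy Bypass -File D:\Start-LiteTouch.ps1</Path>
      </RunSynchronousCommand>
    </RunSynchronous>
  </component>
</settings>
```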

PowerShell script

Once we are in PowerShell, we have full access to the system, and can modify it in any way we choose. In this case, I have copied a LiteTouchPE_x64.wim file to the USB stick, and we can force the hard drive to boot from that instead, continuing our process in MDT LiteTouch. Yea!
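The “force the hard drive to boot from the WIM” part can be done with bcdedit and a ramdisk boot entry. A rough sketch of the classic boot-from-WIM recipe (the paths, drive letters, and boot.sdi source are assumptions, not my exact script; on legacy BIOS machines the loader would be winload.exe rather than winload.efi):

```powershell
# Copy the LiteTouch boot image locally, then create a BCD entry that
# ramdisk-boots it on the next restart.
New-Item -Path 'C:\Boot' -ItemType Directory -Force | Out-Null
Copy-Item 'D:\LiteTouchPE_x64.wim' 'C:\Boot\LiteTouchPE_x64.wim'
Copy-Item 'D:\boot.sdi' 'C:\Boot\boot.sdi'   # ramdisk support file (from the ADK)

# Ramdisk options object that points at boot.sdi
bcdedit /create '{ramdiskoptions}' /d 'Ramdisk options'
bcdedit /set '{ramdiskoptions}' ramdisksdidevice partition=C:
bcdedit /set '{ramdiskoptions}' ramdisksdipath \Boot\boot.sdi

# New osloader entry; capture the generated GUID from bcdedit's output.
$guid = (bcdedit /create /d 'MDT LiteTouch' /application osloader |
         Select-String '{\S+}').Matches[0].Value
bcdedit /set $guid device     "ramdisk=[C:]\Boot\LiteTouchPE_x64.wim,{ramdiskoptions}"
bcdedit /set $guid osdevice   "ramdisk=[C:]\Boot\LiteTouchPE_x64.wim,{ramdiskoptions}"
bcdedit /set $guid path       \Windows\System32\winload.efi
bcdedit /set $guid systemroot \Windows
bcdedit /set $guid winpe      yes
bcdedit /set $guid detecthal  yes

# One-time boot into the WIM, then reboot.
bcdedit /bootsequence $guid
Restart-Computer -Force
```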

Now we have a bridge between the OEM system and our LiteTouch, or any other automated WinPE disk.

Yea! Now for the *REAL* automation to begin… 🙂



Windows 10 In-Place Security Considerations

TL;DR – When performing a Windows 10 In-Place upgrade, you must temporarily suspend any Disk Encryption protections, for BitLocker *AND* 3rd party disk encryption too!

In Place Upgrade

So, how do we upgrade an Operating System? You know, the one we are currently using? Can we upgrade it while it’s in use? Unfortunately, no. The Windows 10 In-Place process is very complex, and it requires full access to all the files on the machine. So how do we do that? Well, the upgrade process needs to shift to another OS, just temporarily, to modify the OS on our C:\ drive. We can use WinPE (Windows Pre-Installation Environment), or in this case WinRE (Windows Recovery Environment).

WinPE and WinRE are lightweight OSes that are contained in a compressed boot.wim file, about 300MB to 500MB in size, placed somewhere on the disk. We can boot into WinPE/RE and have it reside completely in memory. Now we have full access to the C:\ drive on the machine, and we can move files around, including laying down a new OS.


3rd Party Drivers

One of the challenges of shifting to a separate OS like WinPE/WinRE is that we’ll need to re-load any drivers required to access the system, including disk and file system drivers. For the most part, the latest versions of WinPE/WinRE have excellent support for the latest disk controller drivers, and it’s very rare that I’ve needed to supply 3rd party drivers for mainstream hardware. Starting with Windows 10 1607, Microsoft gives us the ability to add 3rd party drivers to WinPE/WinRE using the /ReflectDrivers switch. This includes the ability to supply drivers for a storage controller or even a 3rd party disk encryption tool — anything that is required to access the machine.

Suspending Encryption

First some background…


At my house I have a Lock Box like this. I can place my house key in the box, and if someone needs to get into the house, I can just give them the code to the lock box. This is much better than giving everyone their own key, or just leaving the main door unlocked while I’m out. If I want to revoke access, I just change the code on the lock box, rather than re-keying my whole house.

If you have an OS disk that is encrypted, and you want to upgrade the OS, you probably don’t want to decrypt the ENTIRE disk before the upgrade and re-encrypt it when the new OS is ready; that would take time to read and write data across the entire disk. Instead, it would be better if we could leave the disk encrypted, and just temporarily give the setup system full access. It’s similar to the lock box analogy above: we don’t want to give everyone access to the main encryption key, but the system will allow access at the right time to the right users.

For Microsoft BitLocker, the process is called “suspending”. We leave the disk encrypted, but the encryption keys for the disk are no longer protected. When the new OS is installed, we can re-establish protection via our usual protectors like TPM, SmartCard, Password, etc…
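For BitLocker, suspending is a one-liner. With -RebootCount 0 the volume stays suspended (still encrypted, but with an unprotected key) until you explicitly resume protection; note the automated in-place upgrade tooling normally handles this for you, so this is just to illustrate what “suspend” means:

```powershell
# Suspend (NOT decrypt) BitLocker protection on the OS volume.
# -RebootCount 0 = stay suspended until Resume-BitLocker is called.
Suspend-BitLocker -MountPoint 'C:' -RebootCount 0

# ... perform the in-place upgrade ...

# Afterwards, re-arm the protectors (TPM, PIN, etc.):
Resume-BitLocker -MountPoint 'C:'
```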

3rd party encryption products need to function in the same way. We would like to leave the disk encrypted, but any protections like “Pre-Boot authentication” should be disabled, so the WinPE/WinRE Operating System, with the correct encryption filter drivers, has full access to the main OS. When finished, we can re-establish any Pre-Boot authentication protections supported by the encryption software, like Passwords, TPM chips, Smart Cards, etc… If the 3rd party disk encryption product does not offer this, then the WinPE/WinRE OS won’t be able to access the local C:\ drive.


I’ve been working with a client lately whose security team has correctly identified the In-Place Upgrade/Suspend Encryption behavior I described above. However, they incorrectly describe this as a vulnerability of BitLocker, and have not acknowledged that it applies equally to 3rd party disk encryption tools.

First off, yes, this is a known security vulnerability in the way Windows 10 handles In-Place Upgrades; we simply must temporarily suspend protections as we move off to the offline OS. This is by design. More below…

Secondly, it’s disingenuous to claim that this is only a BitLocker problem. By the design of the current Windows 10 In-Place upgrade system with the /ReflectDrivers hook, 3rd party disk encryption tools must also suspend protections so the WinPE/WinRE offline OSes can access the disk.

This is really important for fully automated In-Place upgrade scenarios like MDT Litetouch or System Center Configuration Manager (SCCM) OSD (Operating System Deployment) tools.


Well, it’s not all gloom and doom. It’s not perfect, but like most things related to security, there are compromises and tradeoffs.

Note that your data at rest, protected by encryption, is only one potential threat vector where bad guys can get your data. There is also malware, OS bugs, and other vectors that are made more secure with the latest Windows releases. It *IS* important to keep your machine up to date and healthy with the latest OS and security tools, and simply avoiding upgrades because you don’t want to expose your machine isn’t the best solution.

But there are also techniques/mitigations we can do to limit the exposure of your data during In-Place Upgrades. You will, of course, need to perform your own threat analysis. Some ideas might be:

  • Don’t allow Upgrades to be performed in an automated fashion, always run attended. (not possible in some large environments).
  • Only allow Upgrades to be performed on site, in semi-secured environments. Never over VPN or Wi-Fi.
  • If you are running in a SCCM environment, we could develop some scripts/tools to monitor Upgrades. If a machine hasn’t returned from In-Place upgrade after XX minutes, then auto-open a Support Ticket, and immediately dispatch a tech.


Install Windows 10 on Surface 1TB with MDT

TL;DR – Here is a script to get past the Disk0 to Disk2 mapping issue with the new Surface Pro with a 1TB drive.

Surface Hardware

OK, first a bit of history: I worked for the Surface Imaging team from 2015 to 2016. Overall a cool job; I learned a lot, and got to sharpen my PowerShell coding skills.

During that time I got to see my first Surface Studio device, even before it was released. One of the unique features of the device was its disk architecture: it contains TWO disk drives, an SSD in M.2 format and a spinning hard disk in 2.5″ format. The OS contains a driver that uses the SSD as a cache. The idea is that you get the size of the 2TB hard disk with (generally) the speed of the SSD.

Of course this creates a problem for OS deployment, because we need to load the special caching driver into WinPE before deployment so both drives are properly identified.

The Surface Pro with the 1TB drive is also unique in this manner: on the inside it isn’t a single 1TB drive; instead it’s two 512GB drives running in a RAID 0 configuration.

So you’re probably wondering how this works within WinPE in MDT. Well, the good news is that the built-in 1709 drivers correctly identify the two SSDs as a single 1TB drive…

… The only problem is that it’s identified as Disk(2), and that breaks stuff.


Yes, yes, I know… mea culpa…

MDT (and SCCM/OSD) make an assumption in the “Format and Partition Disk” step: the target disk number is fixed for each Task Sequence. Now, we can change the target disk dynamically within the Task Sequence by changing the OSDDiskIndex variable, but it will require some coding.

Fix 1

One fix, if you are OK with some WMI queries, is to test for a “Surface Pro” model and a 1TB disk at index 2. I would prefer to test for the ABSENCE of a disk at index 0, but I’m not sure how to do that.
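Fix 1 as a condition sketch, run as a Task Sequence step before “Format and Partition Disk” (the model string match and the ~1TB size threshold are my assumptions — verify against your hardware):

```powershell
# Detect the 1TB Surface Pro case: a Surface Pro model where the
# combined RAID 0 volume shows up as disk index 2.
$model = (Get-WmiObject -Class Win32_ComputerSystem).Model
$disk2 = Get-WmiObject -Class Win32_DiskDrive -Filter "Index = 2"

if ($model -like 'Surface Pro*' -and $disk2 -and $disk2.Size -gt 900GB) {
    # Point the "Format and Partition Disk" step at disk 2.
    $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
    $tsenv.Value('OSDDiskIndex') = '2'
}
```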

Fix 2

The following is a modification of my ZTISelectBootDisk.wsf script, designed specifically for this configuration. Just drop it into the Scripts folder and add a step in the Task Sequence before the “Format and Partition Disk” step.


Now this script has NOT been tested on a 1TB Surface device. However I *AM* available for testing the 1TB surface device. I can forward my home mailing address, if you want to send me one :^).