MDT Gotcha – Upgrading a Deployment Share

Some notes about upgrading your existing MDT deployment share to MDT 2013 Update 1.

First off, if you are upgrading a share, please back up the share first, so you have a copy to reference if necessary. At a minimum, back up the \Control and \Scripts folders.

Even better, make a copy of your MDT Deployment Share, and upgrade the COPY.

Finally, if you have been playing around with creating Windows 10 images, be aware that if you upgrade from a previous MDT 2013 Update 1 beta, you might miss some fixes in the final version that are *NOT* applied during the upgrade.

A big problem is the Windows Version 6.0 test in the Capture Image phase of the Client Task Sequence:


Because of this test, the step will *fail* to apply the WinPE image to the local machine for capture. Whoops!

To fix, simply remove the “Version > 6” test on this step.

This is only a problem for Windows 10 capture task sequences.


Surface Pro 3 Portrait Dock

Had some fun with a weekend craft project: creating a portrait dock for my Surface Pro 3.


The first step was finding the right right-angle USB 3.0 plug on Amazon.


Then I had to find the correct way to mount the plug. I had some trouble figuring this one out, but then I remembered an old unused MMS 2012 notebook with a strong, rigid cardboard cover that worked out great.


A couple of slices with a razor, and I had a working model.



Security Week – MDT LiteTouch with MBAM

Second post of my “Security Week” series…


Most of you out there who are security conscious should be aware of BitLocker. It’s the disk encryption feature built into Windows Vista and above, and a must-have for anyone with a laptop that contains private information. Or for desktops, for that matter.

For the past couple of years, I have only purchased business-class machines that have TPM chips so I can run BitLocker in a secure fashion, and it’s nice to see more Windows 8 machines with TPM built in.

BitLocker Pre-Provisioning

One of the difficult aspects of BitLocker with Windows Vista and Windows 7 is the time it takes to actually encrypt the drive. We could launch the encryption process during the installation, and even tell our Task Sequence to wait until the encryption process is done.

With Windows 8, Microsoft introduced the concept of BitLocker Pre-Provisioning, which is the merging of two new features:

  • Disabled Protectors (Suspended)
  • Used Space only

If you have ever had to upgrade the BIOS on a machine that is BitLocker protected, then you know that you have to disable the BitLocker protectors. BitLocker protects a drive with two sets of keys: the first set encrypts the data on the drive itself, and the second set encrypts the first set of keys. If we need to disable the encryption on a drive temporarily, we don’t want to decrypt the *entire* drive. Instead, we put the first set of keys in the clear for anyone to read; then, when ready, we lock it back down with the second set, which can use a variety of methods like TPM, TPM+PIN, Smart Card, etc.

With BitLocker pre-provisioning, we encrypt the contents of the drive, but put it into a suspended state. Later, when the full OS is installed, we can set the protectors we want, and the drive will be fully protected.

With the Used Space Only switch, we encrypt only the content that has actually been written to the drive (the space in use). If we perform this step immediately after the “Format and Partition” step in the Task Sequence, there is almost nothing on the disk yet, so it should be super quick.
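MDT’s ZTIBDE.wsf drives this step for you, but as a rough sketch of what pre-provisioning amounts to, you could run it by hand from a WinPE command prompt (assuming the OS volume is C:):

```powershell
# Encrypt only the used space; run from WinPE this creates a clear-key
# protector, leaving the drive in the suspended (unprotected) state.
manage-bde -on C: -UsedSpaceOnly

# Confirm: "Percentage Encrypted" should reach 100% almost immediately
# on a freshly formatted volume.
manage-bde -status C:
```

Later, in the full OS, adding a real protector (TPM, PIN, etc.) locks the drive back down.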


MDT has built-in options for enabling BitLocker; however, if you have the correct licenses for MDOP (the Microsoft Desktop Optimization Pack), you might be interested in MBAM (Microsoft BitLocker Administration and Monitoring).

You can integrate MBAM with your existing Domain and/or SCCM infrastructure to push out BitLocker policies. MBAM also does a great job of collecting recovery keys and storing them in its own private database for self-service retrieval if and when things go wrong.

Integrating MDT with MBAM

If you want to use MBAM with your MDT deployment process for new computers, a recommended solution is to let MDT handle BitLocker Pre-Provisioning, and let MBAM handle the process of enabling the protectors. That way you can enforce the correct BitLocker Policies with MBAM, and speed up the process by having the machines already encrypted.

The MDT LiteTouch task sequence already has the necessary steps to support BitLocker pre-provisioning; all we need to do is enable the pre-provisioning part, without letting ZTIBDE.wsf continue with the full encryption.

To do this, we can simply disable the second “BitLocker” step in the “State Restore” phase, and set the CustomSettings.ini file with the following entry:

BDEInstallSuppress = PreInstall
Setting BDEInstallSuppress to PreInstall will allow ZTIBDE.wsf to execute the Pre-Provisioning parts, but not to provision the machine.


Running Windows Server on Consumer Intel Chipsets (Z87)

Some of you may have seen my recent post on my favorite Build Machine, the SuperMicro SYS-5038D.

My other build machine is also pretty sweet, but there are some kinks with it that drive me crazy. The machine is a custom build with an ASUS Maximus VI Hero motherboard, an Intel Core i7-4771 processor, 32GB of RAM, two SSDs, and an additional HDD. I load up Windows Server 2012 R2 (Eval), and it works great for testing Hydration, my POC kit, and other Hyper-V tasks.

It also has an on-board Intel i217v Gigabit network adapter, but it doesn’t work on Server. And this isn’t a technical problem.


What is a server? Are we talking about a Mainframe?

Microsoft, Linux, AMD, Intel, VMware, and others have done very well putting server operating systems not on large mainframes, but on commodity computer equipment, scaled out!

Microsoft Windows client and server operating systems share the same core, but have some slight changes to enable or disable features depending on the target.

If you want to put Windows Server 2012 R2 on a laptop, you can! As long as you are in compliance with the license, Microsoft is cool with that!

INF Files

Windows driver INF files (Windows 95 style) are a bit of a bastard creation. Yes, the design goes back to Windows 95, and it has been upgraded, patched, and tweaked to work with the modern Windows 8.1 operating system. There is one little feature in the Manufacturer section that makes life difficult here.

In this section you can tell the operating system that an INF section only supports a specific operating system version and product type. The product type can be Workstation (Windows 7 and 8.1 client) or Server. Drivers should mostly be platform agnostic, but the switch is there.

Take a look at the driver for my Intel Z87 motherboard with the i217v Gigabit network driver:

[Version]
Signature   = "$Windows NT$"
Class       = Net
ClassGUID   = {4d36e972-e325-11ce-bfc1-08002be10318}
Provider    = %Intel%
CatalogFile =
DriverVer   = 06/12/2014,

[Manufacturer]
%Intel%     = NTamd64.6.3, NTamd64.6.3.1

[Intel.NTamd64.6.3.1]
%E153ANC.DeviceDesc%            = E153A.6.3.1,       PCI\VEN_8086&DEV_153A
%E153ANC.DeviceDesc%            = E153A.6.3.1,       PCI\VEN_8086&DEV_153A&SUBSYS_00008086
%E153ANC.DeviceDesc%            = E153A.6.3.1,       PCI\VEN_8086&DEV_153A&SUBSYS_00011179
%E153BNC.DeviceDesc%            = E153B.6.3.1,       PCI\VEN_8086&DEV_153B

Notice that my DEV_153B device is in the [Intel.NTamd64.6.3.1] section.

  • NTamd64 – means that it works for 64-bit operating systems.
  • 6.3 – means that it works for NT version 6.3, i.e. Windows 8.1 or Windows Server 2012 R2.
  • .1 – means that it will *ONLY* work for Workstation. And that’s our problem.

Apparently someone at Intel saw the switch and said “This means we can prevent this driver from loading on Windows Server!” The management types said: “Hey, this chipset is only intended for clients anyway, so let’s disable server!” (Product Differentiation.) I’m also sure that somewhere in their discussions they realized that “If we disable server, then we don’t have to test on Server either! That will save us time on our test matrix!” I’m sure that sealed it.


The good news is that most *Enterprise*-class laptops and desktops *DO* have drivers that support Server. It’s just with consumer hardware that you need to be aware.

There are ways to get around loading the driver on Server machines; it typically involves modifying the INF file and forcing the OS to accept unsigned drivers. However, having worked in WHQL, I believe strongly that you should only load signed drivers in your environments.

Therefore my fix was to install two PCIe network cards. Yes, actual network cards.

Turns out that the Intel NUC has the same chipset, so it also suffers from the same problem.

Thanks Intel.

How to Video: Debug Windows Panther

Wanted to share this quick video I made illustrating how to find Windows Setup Logs (Panther Logs)

Some Key Takeaways:

  • When running Windows Setup, press Shift+F10 to get to cmd.exe
  • Look for logs in c:\windows\panther
  • A copy of the unattend.xml file will be in c:\windows\panther\unattend.xml
  • Double-check each file to see if it has content (sometimes files listed as Zero bytes are actually not zero!)
  • When asking for help on the MDT TechNet Forums, copy all the panther logs to a public site like OneDrive and share the link.

PersistAllDeviceInstalls in Hyper-V Environments

I’ve been creating some images for virtual machine environments.

One of the goals of sysprep is to take all the elements that make an operating system specific to a machine, and make it generic for other machines. However if you know that the hardware is the same, you can tell Sysprep not to strip out the installed device drivers and keep them for the next machine.

There are two ways to persist the devices and drivers when calling sysprep. If you are using Windows 8 or greater, you can add the /Mode:VM switch to the end of the sysprep call. However if you want the process to work in Windows 7 or Windows Server 2008 R2, you need to put the PersistAllDeviceInstalls element in an unattend.xml file and pass that through to sysprep.
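For reference, the two sysprep invocations described above look roughly like this (run from C:\Windows\System32\Sysprep; the unattend.xml path is just an example):

```powershell
# Windows 8 and above: keep devices and drivers with the built-in switch
.\sysprep.exe /generalize /oobe /shutdown /mode:vm

# Windows 7 / Server 2008 R2: pass an unattend.xml that contains the
# PersistAllDeviceInstalls element in the generalize pass
.\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Temp\unattend.xml
```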

I created an unattend.xml file and placed it in my MDT Litetouch deployment share under the Tools directory.

This particular unattend.xml file is crafted to work for both x86 and x64 platforms.

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="generalize">
    <component name="Microsoft-Windows-PnpSysprep" processorArchitecture="x86"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
    </component>
    <component name="Microsoft-Windows-PnpSysprep" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
    </component>
  </settings>
</unattend>

Within MDT LiteTouch, I can then set my CustomSettings.ini file to:



After building out some of my virtual machines, I decided to run some performance tests against the images that got the PersistAllDeviceInstalls, and those that did not.

I have a script that will convert a *.wim to a *.vhdx file and inject a custom unattend.xml file. I then import the *.vhdx file into a virtual machine and start it up.

For the image that was given PersistAllDeviceInstalls, it took 1 minute 20 seconds from the start of the virtual machine to the logon prompt.

For the normal image without PersistAllDeviceInstalls, it took about 3 minutes from the start of the virtual machine to the logon prompt.

That cut our startup time down by more than half! Pretty cool!


Next up, playing around with the VDI optimization scripts from Jeff and Carl

Creating a second large partition on a single drive machine

Been getting some questions lately about multiple partitions.

“How do I get the partition page in MDT?”
“My task sequence is perfect, except it does not work on uEFI machines?”
“Why is the system partition 499MB?”

And I wanted to write down all the reasons why multiple partitions (drives) are bad, bad, bad…

Drive C: should be as close to 100% as possible.

Multiple Partitions

What do I mean by multiple partitions? By default, even if you do nothing, Windows and MDT will create multiple partitions anyway: 350MB to 499MB reserved for system files, 128MB reserved for uEFI, and any additional partitions for Recovery. All these small partitions are hidden, and most users don’t know they are there. They are necessary for features like BitLocker, where the main C: OS partition is encrypted and can’t be booted without the boot loader on an unencrypted partition.



Note: Microsoft refers to the partition that contains the Operating System as the “Boot” partition, and the partition with the Boot loader as the “System” partition. Yea, I don’t get it either.

But beyond these smaller partitions, some users like to partition their disks down even further, for example taking a 1TB disk and allocating only 500GB for the OS C:, and the remainder as a Data D: partition.

*This*, the creation of large unhidden partitions, is something I generally do not recommend. They are just more trouble than they are worth.

Note, I’m not talking about adding multiple *disks*, just about taking a single disk and making multiple partitions (drives).


Some Points


  • What about dual-booting OSes? Don’t dual boot; put your secondary OSes in a virtual machine. Windows 8 now has excellent Hyper-V support. Yea!
  • Some folks believe that creating a data partition provides a performance boost. It does not; in fact, on spinning disks it can be slower. If you have a legitimate performance requirement, use a 2nd physical disk.
  • Some folks have teams that manage the OS and teams that manage applications, and they don’t like their data to mix. Can’t everyone just get along?
  • Putting your data on a separate partition is not a backup process.
  • If they really want two partitions, tell them to use BitLocker and they’ll get an extra-special hidden partition to fill the void.
  • Some faithful Dan Aykroyd- and Rick Moranis-following folks still believe in Ghost, and decide that it’s easier to refresh a machine if the user profiles are on a separate partition from the system drive. Now we have the power of MDT, USMT, and ConfigMgr! And this *breaks* all kinds of scenarios, like OS Upgrade.
  • Perhaps you have a program that is hard-coded to use D:\ (bad developers!). Use the Subst.exe tool to map a D: drive to a folder on C:!
  • Migration could be simple and fast with the USMT hard-link option, but two partitions are the bad practice that prevents us from using it in many cases. The drive sizes customers typically used with XP are too small for Windows 7, so we have to re-partition, and then we lose the hard-link option…
  • All the cool kids are using single partitions. It’s also the recommended practice of Microsoft Consulting Services.


However, the biggest problem is something I call “Partition Fragmentation.” From Wikipedia:

In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing capacity or performance and often both.

Many times I have come across user computers that were set up with C: OS and D: Data partitions, and one got full! Resizing is not an easy thing to do. And just when people *think* they have the correct sizes set up, their use cases change, and they don’t have enough space on C: and too much on D:, or vice versa. Keep a single C: partition, and you won’t have to manage how much free space you have on two drives, just one.
Rule of thumb: if you don’t have a very *clear* understanding of what size the 2nd partition (D:) should be, then partition fragmentation is just waiting to happen.



Of course, for every rule… there is always an exception. One of the few really good *technical* reasons for creating multiple large partitions is for programs like “Deep Freeze” that require this configuration. If the system requires this, OK.


The MDT LiteTouch Way


By default the task sequence in MDT will create a single 100% partition for the OS C: on disk 0. If you don’t add any additional partitions to the configuration, then the MDT ZTIDiskPart.wsf script will also create the correct BIOS MBR and uEFI GPT partitions required for booting! Yea! Super easy!

If you create extra partitions in the “Format and Partition” step in MDT for disk 0, then MDT will *NOT* create the extra system partitions necessary. If you create the partitions by hand for BIOS, then it might not work for uEFI. Best to leave it as a single partition.

Also, if you are creating partitions by hand, you might miss some of the best practices built into MDT. You might look at the default partition configuration in Disk Management in the OS, see that the default System Partition is 499MB, and assume that it’s a rounding error that should be 500MB. No, it’s *actually* 499MB! Windows backup utilities have unique space requirements if the System Partition is 500MB or greater, so we specifically chose 499MB so we don’t block Windows backup. Yea, it’s little things like that that can cause problems, and why I recommend using well-known and proven solutions.

Wizards for Disk Part

Why doesn’t MDT supply methods for dynamically (or manually) choosing or setting up the partition?

Because it’s hard!

First of all, you could modify the unattend.xml file to force the disk partition dialog to appear during Windows Setup in WinPE; however, that’s not the way MDT works. By the time we call into Windows Setup, the partition(s) are already created. There is nothing to create!

I actually created 3 or 4 mock-ups of partitioning dialogs over the years, and each one had problems, from the UI look and feel to the lag presented when calling diskpart in the background.

The closest I got was to create the disk configuration wizard page, as seen in my post How MDT Litetouch does Partitioning

As for the ability to target a specific partition: there were too many moving parts that could cause MDT to fail, and we couldn’t take the risk.

To summarize: Drive C: 100% of remaining space

Image Factory Automation

One area where MDT Litetouch excels is with Image Creation. I know of several groups within Microsoft (including Microsoft Consulting) who recommend using MDT LiteTouch to create images, even if those images are eventually used within SCCM OSD (Operating System Deployment).


Back in the Windows XP days, the OS came with its own proprietary installation system. Say you spent some time getting XP updated to the latest Service Pack, along with all the necessary security updates and the latest version of Office. You might want to take a snapshot (or checkpoint) of this reference disk (image) to reload on other machines. That’s where Sysprep and 3rd-party products like Ghost came into the picture.

Starting with Windows Vista, Microsoft started distributing the OS using a new file archival format: the Windows Imaging Format (*.wim files). *.wim files are compressed archives with space to store extra metadata. A single *.wim file can also hold multiple image sets while keeping only one instance of each duplicate file, so a single wim file can hold Windows Starter, Home Premium, and Ultimate on the same disc!

The main difference between Ghost (*.gho) files and WIM (*.wim) files is that Ghost files store the contents of a hard disk in block format (partitions and all), whereas WIM files store files. That means that when you apply a *.gho file to a disk of a different size, Ghost itself needs to do some resizing of the partitions to make it fit, whereas a *.wim file holds only the files and streams it knows about (boot sectors and deleted files are ignored).

One of the coolest features of *.wim files is that Microsoft gives customers the ability to capture the contents of a drive (volume) into their own *.wim file. In fact, for some versions of Windows, you can replace the install.wim file on the install DVD with your own captured *.wim file, and continue with the installation process like it came from Microsoft.


So how do you create your own image for use?

  • First off, you should set up a machine *just* the way you want it.
    Install the OS, install apps, configure settings, add drivers if necessary.
  • Next, run Sysprep on the machine. This will prepare the machine to re-run OOBE Setup.
  • Finally, boot into WinPE and capture the image using imagex.exe or dism.exe into a *.wim file.
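For the capture step, the DISM flavor of the command looks roughly like this from WinPE (the paths and image name here are examples):

```powershell
# Capture the sysprepped C: volume into a WIM file on another drive
dism.exe /Capture-Image /ImageFile:D:\MyImage.wim /CaptureDir:C:\ /Name:"Win81-Reference" /Compress:max
```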

Of course this is a major oversimplification of a complex process. For example, adding Drivers to the image depends on the scenario. If you *know* for certain that this image will only be applied against a single kind of computer system you could perform the capture on that reference system so that the image contains all necessary drivers, ready to go. Otherwise, if you are targeting several kinds of hardware, I would strongly recommend using Hyper-V Virtual Machines to create your images, since the OS won’t load any extra drivers into the image.

Enter MDT LiteTouch

The MDT LiteTouch Client and Server deploy Task Sequences were designed from the start to handle the full deployment installation process: OS installation, application installation, Sysprep, and capture, all from the default Client and Server Task Sequence templates.

One of the cool things to do is to make the LiteTouch process into a Fully automated No-Touch process (we reserve the ZeroTouch name for SCCM with MDT extensions :^).

Let’s start off with a Deployment Share setup specifically for image creation in our lab. To automate the process I have created an account on the local machine that has read/write permissions on the Deployment Share but is *not* a member of the local users group. I have also given it a random password.

In our Bootstrap.ini file, we add four lines to the bottom:




This will allow us to skip over the MDT LiteTouch Welcome Wizard, and connect directly to the Deployment Share.
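The four entries are the standard skip-welcome and credential properties; the account name, password, and server name below are placeholders:

```ini
SkipBDDWelcome=YES
UserID=MDT_BuildAccount
UserPassword=SomeRandomPassword
UserDomain=BUILDSERVER
```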

I have created a virtual machine used to capture Windows 8.1 x64 images. I booted up the machine and found its machine BIOS GUID (check the bdd.log file); in this case the GUID is: {29c80ff5-4dc4-4497-a035-472118542fd7}. Some people use the MAC address of the virtual machine instead.

In our CustomSettings.ini file, in addition to the standard settings used by our regular deployments, I have added the following entries:

DoCapture = YES
ComputerBackupLocation = %DeployRoot%\Captures
BackupFile = %TaskSequenceID%.wim

First off, I have changed the [Settings] Priority to add UUID. This means the first thing processed in the CS.ini file will be the matching GUID section found in the file (if any). Within my GUID section, I have created an application bundle to install my preferred application set, defined the settings to capture the machine back to the imaging server, and set everything else to full automation.
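Put together, the relevant parts of the CustomSettings.ini might look something like this (the task sequence ID and application bundle GUID are placeholders I’ve made up):

```ini
[Settings]
Priority=UUID, Default

[{29c80ff5-4dc4-4497-a035-472118542fd7}]
TaskSequenceID=WIN81X64
SkipWizard=YES
MandatoryApplications001={11111111-2222-3333-4444-555555555555}
DoCapture=YES
ComputerBackupLocation=%DeployRoot%\Captures
BackupFile=%TaskSequenceID%.wim
```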

As one last trick, I take a snapshot/checkpoint of my virtual machine so that I can roll back the machine and restart this *automated* imaging process from scratch. This can be great for Patch Tuesday: just roll back and re-image. The only work on my part is to kick off the imaging, and review the logs when finished.


What about SCCM you ask? SCCM OSD (with MDT integration) has the same ability to install an OS, Applications, Sysprep, and Capture. Why not use that system?

Well, if you have a fully functional SCCM OSD deployment system ready, along with all the applications pre-packaged, then yes, it may be a good idea to continue using SCCM OSD to create your images. However, if you do not have a fully functional system ready with all of your applications packaged (fully automated), I would recommend starting with LiteTouch instead.

Running in the Administrator context in MDT LiteTouch allows us more leeway when building our images with unproven systems and components. We can see what’s being installed, see error messages on the screen, and debug in real time on the console. There is just no need to install the overhead of SCCM for a small contained process like image creation if you have not already automated everything in SCCM.


Now there is an important point to make here: if we can automate as much as possible of the imaging process, typically the installation of our applications, we can rebuild our core images over and over again with little effort.

Of course there are some scenarios where component “X” is difficult to install in a fully automated fashion (or we don’t know how to automate it). Sometimes we can *repackage* the application using a 3rd-party tool, or perhaps we can push the installation of this application to another process, perhaps during OS deployment rather than during image creation. MDT LiteTouch also has a “Manual” step that can be added to the Task Sequence to allow an imaging team to perform non-automated steps.

However, for the most part, my recommendation (and the recommendation from many at Microsoft) is that if you can automate the installation of applications, you should, as you can then leverage no-touch image building.

Image Factories

Once you have some of the basic settings defined for creating your image, the next step would be to automate the whole thing with PowerShell.

We can use PowerShell to create user accounts, create virtual machines, assign network switches, attach our LiteTouchPE_x86.iso, and start the VM. We can also use PowerShell to inject our no-touch settings from above dynamically into the MDT process.
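As a sketch, the Hyper-V PowerShell module makes the VM side of this straightforward (the names, paths, and sizes here are examples, not part of the original setup):

```powershell
# Create a build VM, wire it to the lab switch, and boot the LiteTouch ISO
New-VM -Name "Factory-Win81x64" -MemoryStartupBytes 2GB -SwitchName "LabSwitch" `
       -NewVHDPath "D:\VMs\Factory-Win81x64.vhdx" -NewVHDSizeBytes 60GB

Add-VMDvdDrive -VMName "Factory-Win81x64" -Path "D:\ISO\LiteTouchPE_x86.iso"
Start-VM -Name "Factory-Win81x64"
```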

While working for Microsoft’s own IT department, we would create multiple images at once, each for a different use (Windows 7, Windows 8, Windows Server 2008 R2, Windows Server 2012, with *and* without Office). Why would we provide an image *without* Office? Well, there are groups within Microsoft who don’t want Office; they are developing and dogfooding the *next* version of Office :^).

We call this whole system an “Image Factory”. There are a lot of moving parts, but when done properly, rebuilding your image set for Patch Tuesday is no problem.

Custom HTML Code within the MDT Applications Wizard Page

I developed this solution back in 2009, but I never got around to documenting how it would work in practice. It’s not really a feature of MDT, more like a hack for the scenario(s) listed here.
View the last picture of this post to get an idea of what this hack is about. :^)


Say you have an application that requires some additional information from the user during installation. For example, an LOB app that needs to know which local database server to connect to: San Francisco, New York, London, or Beijing? Now optimally, I would recommend a script to determine which local server to connect to, perhaps a table of local gateways and their associated database servers. But that’s not always possible.

MDT provides a way to add pages to the LiteTouch deployment wizard; Michael Niehaus even has a tool on his blog site where you can edit these pages and create new ones. But are there any other ways to provide UI input options?

MDT Variables

A quick reminder about variables in MDT. Variables are typically created in the CustomSettings.ini or BootStrap.ini file, or the LiteTouch Wizard, and then consumed by the stand-alone Task Sequencer and other LiteTouch scripts. MDT (and SCCM OSD) use the *same* namespace for variables: a variable set by the CustomSettings.ini file can be read and modified by the Wizard, the Task Sequence engine, the MDT VBScripts, and PowerShell scripts.

Note that when you put a variable in the CustomSettings.ini file, it must be present in the ZTIGather.xml table to be parsed by the ZTIGather.wsf script. If it is a custom variable defined by you, then it must be declared in the [Settings] section using the Properties value:

[Settings]
Properties=MyCustomProperty

Private variables that are set in the scripts do *not* need to be defined in the ZTIGather.xml file, or in the [Settings] Properties entry in the cs.ini file.

Also note that you can pass variables to your application installs through the command line. ZTIApplications.wsf will expand any %VariableName% references in the command line you specified in LiteTouch. For example:

Msiexec.exe /q /I MyPackage.msi PropertyX=%MyCustomProperty%

Application Wizard

MDT provides a wizard page to display available optional applications for the user to select during installation. This page is displayed using the built-in Windows HTML Application host, MSHTA.exe, and renders in a windowed environment.

MDT will parse the control\Applications.xml file in your deployment share and render the display in HTML, based on the folder structure present in your MDT Console. Several fields are read from the Applications.xml file, including (of course) the name and the comments.

One of the things the MDT scripts do is escape out any HTML tags present in the name and/or the comments. If an unsuspecting IT administrator added a poorly formatted HTML tag like “<table>” without the corresponding “</table>” closing tag in the comments section of an application, it might cause the *entire* Application page not to display properly, with an error shown to the user.

However for those IT administrators in the “know”, we could use this to our advantage.

LiteTouch HTML

The LiteTouch Wizard pages support the full HTML Application schema; however, there are a few input types that we are most interested in:

<INPUT Name='Something3' type='radio' value='value' />
<INPUT Name='Something4' type='checkbox' value='value' />
<INPUT Name='Something5' type='password' value='value' />
<INPUT Name='Something6' type='text' value='value' />
<TEXTAREA Name='Something1'>Value</TEXTAREA>
<OPTION Name='Something2' value='Value'>First Item</OPTION>

After MDT renders this page in the wizard, it will look to see if the variable defined by the “Name” attribute has been defined; if so, it will automatically fill in this value in the wizard. For example, if Something6 was defined, it will fill in that value in the text box above. If Something2 is set to ‘Value’, then that entry will be selected in the listbox.

When you click the “Next” (or “Finish”) button for each page, MDT will automatically read all the input variables, and set each variable defined in the “Name” attribute to the value selected.

There is a lot of spaghetti code in the MDT wizards to make the pages flow as smoothly as possible for end users, so please be aware that some scenarios may be more complex; but for simple edit boxes and listboxes, it should be fairly straightforward.

The Hack

As I mentioned above, when MDT renders pages in the LiteTouch Wizard, it will escape out the HTML tags into something safe for the system: it converts “&” to “&amp;”, “<” to “&lt;”, and “>” to “&gt;”. It does this in the EncodeXML function in the ZTIConfigFile.vbs file. Now, if we were to remove this EncodeXML call when processing the Comments section of the Applications.xml file, then we can do some interesting things.


Change this line:

sComments = EncodeXML(oItem.SelectSingleNode("./Comments").Text)

to this:

sComments = oItem.SelectSingleNode("./Comments").Text

Now that this code has been modified, any HTML code in the Comments section of the application will be rendered in HTML!

The Demo

Let’s add some HTML code to the comments section of our application:

<select name=AntiVirusServerName>
   <option value=""></option>
   <option value="\\AntiVirusUS">AntiVirus NYC</option>
   <option value="\\AntiVirusUK">AntiVirus London</option>
   <option value="\\AntiVirusChina">AntiVirus Beijing</option>
</select>

Next, let’s modify the command line for our installation program to use the “AntiVirusServername” variable defined above:

cmd.exe /c RunSetupCommand.exe /Server:"%AntiVirusServerName%"

Here is what it would look like in the MDT Console:

And here is what it would look like when rendered by MDT:

Now we have a flexible way to add extra parameters to applications, and have them displayed alongside the corresponding application.

Danger! Remember, if you add poorly formatted HTML code to the comments section of any Application, Language Pack, or Task Sequence, you may crash MDT and create a bad user experience, so please be careful when using this feature/hack.

*WHEW* that was a bit long, but I hope you found it useful (or at least informative).


Re-ordering applications in MDT

Spent some time today updating tools that were out of date and needed a refresh.

One of the applications is my MDT Application Re-ordering tool.

Application Reordering

There are two different scenarios where you might want to rearrange applications in MDT.

1. Application X needs to be installed *before* application Y. If so, use the Dependencies option on the application’s entry page in the Deployment Console. This is the only way to ensure applications are installed in a specific order.

2. However, if you want the applications to *appear* in a specific order in the Deployment Wizard, then you can use the tool to perform this task. This also has a side effect: any applications selected in the MDT wizard will be installed in the order displayed in the wizard.

Behind the Scenes

After the user arranges the applications in the MDT2010Ordering tool, the tool will construct a PowerShell script to do the heavy lifting. The script essentially just moves the applications out of the folder and back again in the correct order.

If you wanted to do all of this by hand, you could also just modify the control\ApplicationGroups.xml file to perform the actions. Be careful! :^)
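For reference, the display order within a folder is just the order of the <Member> elements in that file; a simplified, hypothetical group looks something like this (the GUIDs are placeholders):

```xml
<groups>
  <group guid="{GUID-OF-FOLDER}" enable="True">
    <Name>Applications</Name>
    <Member>{GUID-OF-FIRST-APP}</Member>
    <Member>{GUID-OF-SECOND-APP}</Member>
  </group>
</groups>
```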