Ask MDTGuy: Adding Applications to MDT as .exe

Got an e-mail from a reader today about getting MDT to install an application that ships as a stand-alone .exe.

In a perfect world, we’d all have MSI files to install software from, because with an MSI there is very little guesswork beyond which parameters to set. Sadly, stand-alone .exe files often come with no documentation at all, and we’re left playing the guess-and-check game.

Finding the magic switch to install software silently can be a real pain. The best thing to do is work it out on a test PC first, then add it to MDT. I like to copy the file to the desktop of a VM, snapshot it, and open a command prompt from within that folder; this is easily done by typing CMD into the folder’s address bar in Explorer. That way you can nail the command down first, then add it to MDT.

I usually start with the most obvious switches: /s, /q, -s, or -q.

Sometimes, however, you need to ask nicely and use /?, -?, /h, or -h to get the installer to tell you what to do. This works more often than you’d think: it’s a way to ask the executable for help, and in some cases the package creator spent a few minutes leaving that documentation behind for us sysadmins. For all the developers out there who do this, we salute you!
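As a sketch, a trial-and-error session from a command prompt opened in the installer’s folder might look like the following. Note that setup.exe here is just a stand-in for whatever installer you’re actually testing:

```bat
:: "setup.exe" is a hypothetical stand-in for the installer you're testing
:: First, ask it to document its own switches:
setup.exe /?

:: If no help text appears, start guessing with the common switches:
setup.exe /s
setup.exe /q
setup.exe -s
setup.exe -q
```

Remember to revert the VM snapshot between attempts so each guess runs against a clean machine.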

Microsoft installers are usually pretty good about honoring /passive /norestart on things like the .NET Framework. If all else fails, there is always Google: searching for the product or installer file name followed by “silent” or “unattended” will usually do the trick.
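For example, the .NET Framework and Visual C++ redistributable installers typically accept the passive/no-restart pattern. The file names below are examples; switch support varies by version, so verify with /? first:

```bat
:: Example Microsoft-style silent installs (file names vary by version)
dotNetFx45_Full_setup.exe /passive /norestart
vcredist_x64.exe /passive /norestart
```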

As Johan says, “Happy Deployments”!

ASK MDTGuy: Two Deployment Shares?

Got a great question from a reader today: “I hear you often explain you have two deployment shares. Have you discussed this before on how you set this up? I’m just starting into MDT and am curious of best practices.”

This is actually a very good question. MDT can build images for us as well as deploy them, so I use one share to build images and another to roll them out to users. This makes building images faster, more consistent, and more reliable, and since you need to rebuild about every six months at the very least, it also removes the human error that comes with the tedious build checklists I am sure you are used to. The build share uses the same standard client task sequence you use to deploy the OS; it just starts from the vanilla .wim on the .iso. We add a few additional steps, like installing Office, .NET 4.5, and the VC++ runtimes, and pause the sequence to let you manually tweak a few things here and there before resuming the build. The pause is done by adding a call to the LTISuspend.wsf script hidden inside MDT.
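For reference, the pause is just a “Run Command Line” step added to the task sequence at the point where you want to stop. A common way to invoke the suspend script looks like this (%SCRIPTROOT% is a built-in MDT task sequence variable pointing at the share’s Scripts folder):

```bat
:: "Run Command Line" step that pauses the task sequence until you
:: click "Resume Task Sequence" on the desktop
cscript.exe "%SCRIPTROOT%\LTISuspend.wsf"
```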

The reason you use two shares comes down to the fact that settings appropriate in your build share wouldn’t be appropriate in your production share, and vice versa. For instance, there are settings in the CustomSettings.ini file that automate the image capture; DoCapture=YES is configured in my build share, for example. The production share also has drivers that you want to keep far away from the build process, because as I am sure you know, when it comes to images, driver-free is the way to be.
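A minimal build-share CustomSettings.ini fragment might look like this. DoCapture is from the discussion above; the server, share, and file names are made-up examples you would replace with your own:

```ini
[Default]
; Build-share settings: automate the capture at the end of the build.
; Server/share/file names below are examples only.
DoCapture=YES
ComputerBackupLocation=\\MDT01\BuildShare$\Captures
BackupFile=Win7-Build-%TaskSequenceID%.wim
```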

As far as best practices are concerned, I cannot stress the following point enough, because at least once a month I have to explain one very key point to somebody: ALWAYS build your images in a Virtual Machine. By building a “driverless image” in a VM, you effectively ensure that the image will run on anything: a laptop, a desktop, an HP, a Dell, or a Lenovo. Don’t worry about drivers in your image. MDT can be configured to inject the drivers “automagically” in the production share using the glorious “total control” model taught by Mr. Johan Arwidmark.



ASK MDT Guy! Building Images in a VM?

Today I got a pretty good question from Matt:

Just a quick question regarding images. I currently build my images using a physical machine which has been fine up until now. When I image, I go from the main desktop I’m collecting an image from to deploymentshare$\scripts\BDD_AutoRun then choose a “Capture” task sequence. During this, the image is Sysprepped but you can only sysprep 3 times as the “Windows rearm count” depletes. With this, my question is – How do you get around the windows rearm count/sysprep and (If I remember correctly) you use a virtual machine – which vm host do you use? Is there a better way I can be doing this? The only time I update my image is when I need to put more “windows updates” onto it.

Thanks for writing! First things first: stop building images on physical hardware! I know, I know, it’s the tried-and-true old-school method, but as you are seeing, it’s not cutting it anymore. I’ve also seen Mikael Nystrom say he won’t help anybody who still does this. The methodology may have worked okay in the past, but today it’s largely regarded as deprecated, and building in a VM is simply best practice. I know there’s a task sequence labeled Sysprep and Capture, but it’s highly unreliable; don’t use it. The two key advantages are that you can pretty much fully automate the build (removing human error), and since it’s built in a VM, the image will run on laptops, desktops, tablets, you name it; MDT can inject drivers for you. This also sidesteps the rearm count, because each build starts fresh from the vanilla .wim rather than re-sysprepping the same installation over and over. It lets you build a new image once a month, once a quarter, or once every six months, whatever works best for you.

I use Hyper-V because it’s free with any copy of Windows 8.1 Pro or Enterprise; use DISM to enable Hyper-V from an elevated console. If you’re not on 8.1 Pro or better, get a copy of VMware Player or VirtualBox.
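Enabling Hyper-V with DISM is a one-liner from an elevated prompt (this assumes a Windows 8.1 Pro or Enterprise machine whose hardware supports virtualization):

```bat
:: Run from an elevated command prompt or PowerShell console;
:: a reboot is required afterward
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
```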

Finally, I use a separate deployment share that’s optimized to fully automate an image build, so I have one share for building and another for deploying. The build share runs a standard client task sequence that installs IE, Office, the .NET Framework, and the Visual C++ runtimes, and then patches the OS and all the Microsoft software it just laid down. When it’s all done, it simply captures the image after resuming the task sequence (use the LTISuspend.wsf trick), and then I import that image into my production share.

ASK MDTGuy! Redundant Windows Updates?

Got a great question from a long time reader today:

I’ve googled this and cannot seem to find a good explanation.  What are the differences between the following with regards to windows updates:

  1. Task Sequence, Windows Update (Pre-application)
  2. Task Sequence, Windows Update (Post application)

The updates are taking an obscene amount of time to apply (2 hours).  And I just updated the image with updates on 7/1.  So, I’m just trying to find anything I really don’t need to be doing (redundant steps).

Both are simply steps in the task sequence that pull Windows updates either from Microsoft’s public update servers or, if you have MDT configured to do so, from a WSUS server. I use WSUS at work, and it runs like a champ. There are two separate steps in the default task sequence (pre-application and post-application) because if you’re installing Office or some other Microsoft application, you’ll want updates to run again after the application install. This is particularly helpful when you’re building images, or when you’re installing something like Visio as part of a bundle during the State Restore phase for a limited set of users.
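Pointing MDT’s Windows Update steps at an internal WSUS server is a single CustomSettings.ini property. The URL below is an example; substitute your own server name and port:

```ini
[Default]
; Send the task sequence's Windows Update steps to an internal
; WSUS server instead of Microsoft's public servers.
; The server URL is an example only.
WSUSServer=http://wsus01.corp.example.com:8530
```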


As an image gets older and older, the updates take longer and longer to apply. Fun story: I once had a task sequence that took 28 minutes to run, and over the course of a year and a major Office service pack, it grew to almost two hours. It’s not that the network was getting slower; it was the Windows Update steps, which were pulling from Microsoft’s public servers (we had no WSUS at that location), patching, and then patching again, and those steps took longer and longer as the image aged.

My best advice is to get a WSUS box up ASAP, and make sure you have a build lab that you use for building images.

My build lab is a deployment share separate from my production share, fully automated and optimized for building images once a month after Patch Tuesday; I call that day Image Thursday. We reimage a lot where I work, so having a freshly patched image keeps our imaging times to a minimum.

ASK MDTGUY! Legal Disclaimers Making You Click Again and Again?

This week’s question comes from Rob Dog.

Q: We have a computer usage disclaimer that requires us to click OK before being asked to enter a username and password. This means that I have to click OK at each reboot. Is there a way past this?

A: Oh, GPOs that break the glory of MDT’s automated installations… The bane of my existence.

There are three real ways around this. I say real because turning off GPO enforcement by hacking the OS is not a good idea, and I’ve never gotten it to work anyway. The best fix is to just turn off these legal disclaimers (they’re bullshit; not only does nobody read them, I’ve never been convinced they’re really all that legally binding). Second is to use some kind of staging OU (an OU without these settings) and just move the machines later. And third, remove the join credentials from your XML and move the Recover From Domain step in your TS to the very end. This creates some problems, but it’s what I do at the moment.

Johan teaches us to use WMI filters on the GPO: the idea is that a specific policy won’t apply while a folder MDT creates (and deletes at the end of deployment) exists. I still need to try this at my job to see how it works, but it seems like a good idea, and options 1–3 listed above are all valid in the meantime.

ASK MDT Guy: Adding Scripts as Applications to MDT

Just received a great question this morning from Paul who’s asking about adding scripts as applications to MDT shares to be selected as needed at deploy time.

Is it possible to add a really small/simple .bat file I created as a standard application that users can just check a box to add at deploy time?
The .bat file is really simple: it just displays a few lines in the command prompt and does one other thing, and at the end there is an EXIT that closes the command prompt.
I right-click Applications in the Deployment Workbench and choose “Application with source files.” The main question I have is: what would I put in the Quiet Install Command?


The short answer is yes: you can very easily add batch scripts as applications in a share so helpdesk staff can install them “as needed” at deploy time. I would stick to one .bat per application, though; nesting them may be half the problem right there.

I prefer, however, to use VBScript (.vbs) or Windows Script Files (.wsf) whenever possible, for a number of reasons, not least of which are the limitations of .bat files around UNC paths and logging.

If you’re in a crunch, and you know your batch script works, go ahead and use cmd.exe /c script.bat as your quiet install command.

The /c tells cmd.exe to run the command and then terminate, closing the command prompt when it’s done.

As for importing the applications…

You could do it one of two ways. The first is to import it as an application with source files and point to a folder that contains the batch or VBS script.


Sometimes I’ll instead import it without source files, use the %SCRIPTROOT% variable, and just throw the script into the share’s Scripts directory. I’m trying not to do that anymore, though: the advantage of the first method is that if I need to move the application to another share down the road, it’s a simple drag-and-drop operation.
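Side by side, the Quiet Install Command for the two import methods looks like this. The script name install.bat is hypothetical; substitute your own:

```bat
:: Method 1: imported WITH source files. The working directory is the
:: application's own folder in the share, so a relative path works.
cmd.exe /c install.bat

:: Method 2: imported WITHOUT source files. The script lives in the
:: deployment share's Scripts folder, referenced via %SCRIPTROOT%.
cmd.exe /c "%SCRIPTROOT%\install.bat"
```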

Why would Microsoft make such a simple thing so complicated?

Today I got an e-mail from a reader, Blake, who was wondering why Microsoft would make imaging more complicated with MDT, as opposed to the “simplicity” of the basic disk-cloning tools that were ubiquitous over a decade ago.

Question: Why would Microsoft make such a simple thing that has been used for so many years so complicated?

Answer: Agreed, the complexity of MDT is quite intimidating. I must admit that the first time I opened MDT, I had no idea where to even begin.

I’ve been using MDT for over two years now, and there’s no way I would go back to the nightmare of managing a separate image for every make and model. That wasn’t simple; that was hell. I would never go back to running Windows updates by hand on a six-month-old image. That wasn’t easy; that was a pain in the ass. I wouldn’t go back to installing one-off applications by hand; that wasn’t simple, that was a chore. The dynamic driver injection, the Windows updates, and the dynamic provisioning are really hard to beat. When I worked for Xerox, I reduced their image count from over twelve to two: one for 32-bit and one for 64-bit. It was beautiful. Long gone are the days of maintaining dozens and dozens of images. Today I have one 32-bit Win7 image and one 64-bit image, and that’s beautiful.

Understand that back in the early days of Windows XP there were almost no Microsoft deployment tools; there was Sysprep, and that’s as far as Microsoft would take you. That all changed with Vista, which shipped alongside the first version of the AIK. MDT was still called the Business Desktop Deployment kit (BDD) at the time, but around the release of Vista SP1 it was rebranded as MDT.

Now, for the thick vs. thin image debate: fans of thick images say they’re faster, and fans of thin images say they’re more flexible at the expense of speed. Honestly, I don’t like either. I’m a big fan of the “hybrid” image.

Seriously, take what’s good about thick images (speed) and what’s good about thin images (flexibility), and you can have your cake and eat it too. Build the image in a VM with MDT and the cost of maintaining your images drops dramatically.

Here’s what I recommend to lots of people: put Office, the C++ runtimes, and the .NET Framework in your image, along with the Windows and Office updates, but keep Java and Flash out. They’re easy to add back in, and they get updated practically weekly.


I also suspect you’re missing the real benefit of using hybrid images with MDT, and that is flexibility. I personally have to support a call center, accounting, marketing, sales, lending, and human resources. I don’t have the time to build ten different images for all the departments I support, so I build a fairly generic image that has Office, the C++ runtimes, Silverlight, and so on, and then I use MDT’s wizard to customize which apps I install at deploy time. That, my friend, is what separates a tool like MDT from a toy like Clonezilla.

For more questions and answers: See the Ask MDT Guy Page