Had the privilege and honor of studying under both Johan Arwidmark and Mikael “The Deployment Bunny” Nystrom in Redmond, WA for a week this past summer. During the session, Mr. Nystrom walked through using ZTIGather.wsf to debug CustomSettings.ini verbosely from the command line instead of actually kicking off a deployment and crossing your fingers.
I knew it was possible, and I had done it before, but I hadn't needed to do it in a long, long time until today. In an attempt to find his write-up again, I stumbled across this…
In the article, he’s referring to a local install of CMTrace; in this example I am running it from the server. I figure the PC I am testing on may not always have it installed, but the idea is the same: run the gather, run CMTrace, and open the log file! It’s that easy! My buddy from Questa thinks this is pretty cool! Thanks to Mikael for the good idea!
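For reference, the commands take roughly this shape (the server name, share name, and CMTrace location are placeholders; adjust them to your environment):

```bat
:: Run the MDT gather manually from the test PC against the deployment share
cscript.exe "\\MDT01\DeploymentShare$\Scripts\ZTIGather.wsf" /debug:true

:: Then open the resulting log with CMTrace, run straight from the server
:: (assuming you've copied CMTrace.exe up to the share)
"\\MDT01\DeploymentShare$\Tools\CMTrace.exe" C:\MININT\SMSOSD\OSDLOGS\ZTIGather.log
```

When run outside a task sequence, ZTIGather.wsf writes its log under C:\MININT\SMSOSD\OSDLOGS\, so that's the first place to look.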
Just as I had convinced myself I had solved this, it’s back…
A clean MDT 2013 task sequence with a recently built hybrid image really shouldn’t ever take much more than an hour at most to complete. I have a land speed record of about 18 minutes on bare metal from PXE. MDT should drop an image, patch it, install mandatory apps like antivirus and management clients, then join the domain in about 30 to 45 minutes. But once again I’ve been seeing PCs sitting on ‘Searching for Updates’ for over an hour, dragging this out to close to two hours.
Checking the BDD.log file, there it is…
Windows Update in MDT Taking FOREVER to search for updates.
You may have seen this, Windows Updates just searching and searching and searching…
Stumbled across this thinking once again that it had to be related to a bug in the Windows Update Agent, and sure enough I found this:
I received an e-mail today from a reader who was asking why I use two deployment shares — why not just one? In his defense, I think he was confused, but I’ll try again to stress that MDT can be used to build images for us, and that this is preferred for many reasons, not the least of which is saving time. If I can automate it, why do it by hand? Anyway, here’s a rant laced with excerpts from the exchange.
I use two shares: one to build images and another to deploy them. The first I name BuildShare and the second ProductionShare. In my image, I put Office, the .NET Framework, Visual C++ runtimes, and Silverlight. I don’t put much else in there, but I let MDT patch those, and then there are some registry tweaks I run for branding and/or defaults I want to bake into the image. This is just a standard client task sequence that’s set to capture the image for me at the end, since it IS a separate share and it is configured to capture at the end of running task sequences.
This is the reason I use a separate share: it lets me configure the CustomSettings.ini file to automate a large part of the capture process. I simply set DoCapture to YES, and by running a standard client task sequence, MDT will capture the image for me! I can then take a fully patched system and send it off to the production share. That’s what installs Flash, Java, Adobe — all that crap that’s always changing or varies by department. It’s easier to just update the MSI in the share once and be done with it.
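As a sketch, the relevant CustomSettings.ini lines on the build share look something like this (server, share, and file names are placeholders, not my exact setup):

```ini
[Settings]
Priority=Default

[Default]
; Tell MDT to sysprep and capture at the end of the task sequence
DoCapture=YES
; Where the captured WIM lands
BackupShare=\\MDT01\BuildShare$
BackupDir=Captures
BackupFile=%TaskSequenceID%.wim
```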
This task sequence is fully automated; the hardest thing I have to do is boot a VM, select which task sequence I want to run, and go get some tacos. MDT builds and captures the image for me.
Why even image at all? Why not just install everything by hand? We use images to save time; it’s all about that final deploy time. Often when you need to re-image a broken computer, that PC needed to be ready hours ago, but it’s not — it’s dead because some user clicked god knows what in an e-mail. Deployment is about more than bare-metal installs; it’s about being able to use operating system deployments for break/fix. The better patched that reference image is, the faster we can get users up and running and back to work. Few things get clients more excited than telling them that not only can MDT image the PCs, it can build these images for us as well. My build lab builds me a new “shiny” image every month and our deploy times stay under half an hour. The older I let reference images get, the more patching they need before I can let a user sign in.
Having a build share allows me to build a new image and automate that build process and save hours in the process. This then also has the advantage that I can reliably repeat the EXACT same capture six weeks or six months later when I need a new image with all the new patches, and I can keep my deployment times low.
The time to install Office may seem minimal now, but the time to patch Office, the .NET Framework, Visual C++, and the core OS grows and grows as the image gets older. Simply put, given enough time this patch time balloons from a dozen updates into hundreds, and your ability to patch a thin image on the fly becomes less and less reliable. I’ve seen a 14-month-old image go from 28 minutes to deploy to almost three hours, thanks to Office and .NET service packs that needed to be downloaded and applied. It’s just more reliable to have a task sequence that can cook a new image for me, one I can run once in a while to keep those times down, because I can promise you the one-hour difference you see between a thin and hybrid image today becomes more like three hours a year from now. I’m not suggesting you put Java, Flash, and Adobe Reader in the image; just put in Office, the Visual C++ runtimes, and the .NET Framework, because MDT can at the very least patch those prior to running sysprep and capture for me. Think of this ‘hybrid image’ as a software installation platform that your core line-of-business applications will need to sit on.
With all that being said, it’s important to note that there are very specific things that should only be done in your production share’s task sequence, such as domain join, Windows activation, BitLocker, and antivirus installs.
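As an example of what might live only in the production share’s CustomSettings.ini (the domain, account, and OU below are placeholders — and never use a privileged account for the join):

```ini
[Default]
; Production-only: join the domain during deployment
JoinDomain=corp.example.com
DomainAdmin=MDT_JoinAccount
DomainAdminDomain=CORP
DomainAdminPassword=SomePassword
MachineObjectOU=OU=Workstations,DC=corp,DC=example,DC=com
SkipDomainMembership=YES
```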
For a complete guide on how this is done, see Johan’s guides. I follow them religiously for one reason: deviation from his teachings all but guarantees pain. Compliance ensures happy deployments. ’nuff said.
Had the privilege to study under the two “Jedi Masters” themselves, Johan and Mikael, this summer in Redmond, and some of what they covered there they presented last week at Ignite. Check out their video below!
Mikael covers using MDT to build images, which is my preferred way to build them. It’s definitely worth checking out!
Nothing like testing a new image build task sequence. It’s about as fun as watching paint dry, but I’ll take it any day of the week over building images by hand like the Clonezilla peasants do.
I’m a big fan of using MDT to build images for several reasons, not the least of which is saving time. I’d much rather engineer the automated tweaks now and be able to consistently and reliably automate that same build once a month after every Patch Tuesday. Given enough time and effort, you can fully automate your image build to the point that you don’t need to run LTISuspend.wsf to do any of these things by hand in the image.
I know, I know, lots of what I’m about to show can be done with GPOs, and yes, it’s probably better to keep your image on the lean side, but these are just examples of how you can use command-line steps to call reg.exe and add or delete certain registry keys directly in the task sequence when building a reference image. Remember, with great power comes great responsibility…
Calling reg.exe is easy: simply create a single Run Command Line step in your task sequence.
In the past, I’ve exported and imported .reg files and configured them as applications, which does have the advantage of being somewhat modular. In this case, though, I just need to make a dozen or so tweaks to the default Administrator account prior to running sysprep. That way this wallpaper becomes the default for new users, since I use the CopyProfile trick to get it set as the starter wallpaper in the new image.
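As an illustration, the Run Command Line step for a wallpaper tweak might look like this (the value and wallpaper path are examples, not my exact settings):

```bat
:: Set the wallpaper for the account being sysprepped; CopyProfile in
:: unattend.xml then carries it into the default profile
reg.exe add "HKCU\Control Panel\Desktop" /v Wallpaper /t REG_SZ /d "C:\Windows\Web\Wallpaper\corp.jpg" /f
```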
Here are a few others I throw in the image so they’ll end up in the default profile.
I’ve been busy rebuilding a whole new MDT server from the remnants of our legacy build share here at the office, and since I’m figuring out better ways to do old tricks, I figured I’d show you guys how to enable features within the task sequence by calling DISM from the command line.
If you already know which feature you need to enable, great; if not, use dism /online /get-features to list the features available to enable.
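For example, to find the MSMQ-related feature names (the findstr filter is just an example; swap in whatever you’re hunting for):

```bat
dism.exe /online /get-features /format:table | findstr /i "MSMQ"
```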
Simply add a General step to your task sequence of type Run Command Line.
In this example, I’ll be enabling MSMQ Triggers.
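The step’s command line looks something like this; on Windows 7-era DISM, parent features have to be listed explicitly, so MSMQ-Container and MSMQ-Server come along for the ride:

```bat
dism.exe /online /enable-feature /featurename:MSMQ-Container /featurename:MSMQ-Server /featurename:MSMQ-Triggers /norestart
```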