Why Should I Attend the PowerShell Summit?

Starting last fall with the European PowerShell Summit, and continuing this year with the North America PowerShell Summit, all of the session recordings are available on YouTube. Because of that, you may be asking yourself, “Why should I go to the Summit when I can just watch everything online?” Here are ten reasons, in no particular order other than the order in which my brain dumped them out.

  1. You get direct interaction with the product team. And I am not talking about members of the PowerShell team sitting in a corner, not interacting with anyone.  They are there to get as much feedback as possible, to learn how people are using PowerShell, and to interact directly with members of the community.  This isn’t the Microsoft of 10 years ago, or even two years ago.  When they say they want your feedback (and not just the good stuff), they absolutely mean it.  Special thanks to Lee Holmes, Michael Greene, Joey Aiello, Angel Calvo, Hemant Mahawar, and Kenneth Hansen for making the trip out to Charlotte.
  2. I don’t care how much you interact with other members of the community over Twitter, Email, Google Hangouts, whatever.  There is no substitute for meeting people face to face, shaking their hand and getting to know something about them besides how they use PowerShell.
  3. And combining #1 and #2, you also get to talk to (and listen to) people about how they have solved problems using PowerShell, and what their thought process was in creating those solutions. You can then ask, “I have this problem; what would you do to solve it?”  One of those conversations alone is worth the price of admission.
  4. You get to watch Mike Robbins “harass” Rohn Edwards all week by telling everyone how great his sessions are going to be and how everyone needs to go to them.  By all accounts they were awesome.
  5. You get to see the look on Dave Wyatt’s face when Microsoft announces that “his code” (Pester) is shipping with the next version of Windows Server.
  6. You get to have Steven Murawski answer your questions about creating Custom DSC Resources while you are creating them.
  7. You get a free Chef T-Shirt, courtesy of Steve.
  8. You get awesome Nano Server and PowerShell stickers courtesy of the one and only Jeffrey Snover.
  9. You get to watch Jason Helmick live and in person talk about how he has his depends on.
  10. You end up finding a bug in class-based DSC Resources that you only found because you participated in the DSC Resource hack-a-thon at the PowerShell Summit.  So make sure you vote on that!
  11. Bonus!  Jeff Hicks gives you a signed PowerShell Deep Dives book and a 30 day Pluralsight subscription to give away at your next user group meeting.
  12. Bonus!  You get to watch Jeffrey Snover demonstrate and talk about a bunch of stuff I can’t repeat or talk about upon fear of death :).
  13. Bonus!  You get to talk to (and listen to) June Blender about PowerShell, PowerShell Help, and writing.  Her passion for and knowledge of those topics is unbelievable.
  14. Bonus!  You learn how little you really know about PowerShell.  This is a good thing!  This is also something I relearn on a nearly daily basis.

I was also asked by Josh Duffney on Twitter what I thought were some of the “must watch” videos from the Summit.  The lame answer is “everything”, but that’s also not realistic.  If you put a gun to my head and said “you have to pick a handful of sessions,” here are the eight I would pick (in no particular order).  All the Summit videos can be found in this playlist on YouTube.

  1. Kenneth Hansen & Angel Calvo PowerShell Team Engagement
  2. Don Jones DSC Resource Design
  3. Dave Wyatt on Automated Testing using Pester
  4. Defending the Defenders Part 1 & 2
  5. Debugging
  6. PowerShell Get
  7. Ashley McGlone on DSC and AD
  8. PowerShell v5 Debugging (There is also a session on Debugging PowerShell by Kirk Munro.  These are different)

Demo DSC – Part 3

In Part 1 of this series I talked about how I demo’d the building of a Domain Controller. In Part 2 I talked about demoing the building of a Pull Server, an App Server, and then using the two servers to show how a Pull Server works and what needs to be done to make the magic happen. If you didn’t read Part 1, here is the disclaimer:

This was never intended to demonstrate all the features and capabilities of DSC (there’s a lot!), but instead was done to show at a high level the kinds of things that are possible and to start a discussion about where it fits into our organization immediately and going forward.

My outline for this part of the demo looked like this:

  1. Build Web Server
    1. Run BuildWebServer Script on the Web Server
    2. Talk about what’s going on while the server reboots
      1. File copy after domain join
      2. Install of Roles and Features, IIS Components
  2. Post Reboot
    1. Show IIS Site(s)
      1. Show default as stopped
      2. Show DSCTest Website
    2. Browse to site from App Server – http://<WebServerName>:8080
    3. Break Web Server
      1. Change IIS Binding
      2. Delete WebSiteFiles Folder
    4. Show broken site from App Server
    5. Talk about various ways this could be fixed (Push/Pull)
    6. Run the BuildLabWebServer script on the Web Server
    7. Show working site from App Server

For comedic purposes, here is what my awesome Microsoft Word website that I break in this demo looked like:



Here is the Configuration script in its entirety. It’s also available on GitHub.
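The full script is on GitHub; for readers skimming along, here is a heavily abbreviated sketch of the kind of thing a Web Server Configuration like this might contain. The node name, paths, share name, and port are my own placeholders, not the originals, and I am assuming the xWebsite resource from the xWebAdministration module:

```powershell
# Hypothetical sketch of a BuildWebServer-style Configuration; names are placeholders
Configuration BuildWebServer
{
    Import-DscResource -ModuleName xWebAdministration

    Node 'WebServer'
    {
        # Install IIS
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        # Copy the website files down from the share created on the Pull Server
        File WebSiteFiles
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = '\\PullServer\WebServerFiles'
            DestinationPath = 'C:\WebSiteFiles'
            DependsOn       = '[WindowsFeature]IIS'
        }

        # Create and start the DSCTest site on port 8080
        xWebsite DSCTest
        {
            Ensure       = 'Present'
            Name         = 'DSCTest'
            State        = 'Started'
            PhysicalPath = 'C:\WebSiteFiles'
            BindingInfo  = MSFT_xWebBindingInformation
            {
                Protocol = 'HTTP'
                Port     = 8080
            }
            DependsOn    = '[File]WebSiteFiles'
        }
    }
}
```

Because the File and xWebsite resources are declarative, re-running the same script is also what repairs the “broken” server later in the demo.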

Demo DSC – Part 2

In Part 1 of this series I talked about how I demo’d the building of a Domain Controller. In Part 2 I am going to talk about how I demo’d building a Pull Server, an App Server, and used the two servers to show how a Pull Server works and what needs to be done to make the magic happen. If you didn’t read Part 1, here is the disclaimer:

This was never intended to demonstrate all the features and capabilities of DSC (there’s a lot!), but instead was done to show at a high level the kinds of things that are possible and to start a discussion about where it fits into our organization immediately and going forward.

The outline for this part of my Demo looked like this:

  1. Talk about the purpose of a Pull server (Can also be used to push and write Configurations)
    1. Show how nothing is configured (name, domain, roles/features, etc.)
    2. Open ISE, Run BuildPullServer Script
    3. Will reboot. While rebooting show the computer account on the DC
    4. Login as domain account
      1. Create share C:\WebServerFiles, share with everyone (explain why we need it later). Explain that this could have been done with DSC; I just chose not to.  This share will come into play later.
      2. Copy website files to this share (I created a “website” in Word to use with a Web Server, that will come in the last part of this series)

Here is the Configuration script in its entirety. It’s also available on GitHub.
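As a rough sketch of what a WMF 4-era Pull Server Configuration looked like, here is the shape of it using the xDscWebService resource from the xPSDesiredStateConfiguration module. The endpoint name, port, and paths below are assumptions on my part, not necessarily what my script used:

```powershell
# Hypothetical sketch of a Pull Server Configuration (HTTP-only lab setup)
Configuration BuildPullServer
{
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node 'PullServer'
    {
        # The DSC-Service feature provides the pull server bits
        WindowsFeature DSCService
        {
            Ensure = 'Present'
            Name   = 'DSC-Service'
        }

        # The OData endpoint that nodes pull configurations and modules from
        xDscWebService PSDSCPullServer
        {
            Ensure                = 'Present'
            EndpointName          = 'PSDSCPullServer'
            Port                  = 8080
            PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
            CertificateThumbPrint = 'AllowUnencryptedTraffic'   # lab only; use a real cert in production
            ModulePath            = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
            ConfigurationPath     = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
            State                 = 'Started'
            DependsOn             = '[WindowsFeature]DSCService'
        }
    }
}
```

The ModulePath and ConfigurationPath here are the two folders everything else in this post copies files into.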

With that done, I then built what I called an App Server. Don’t think that I somehow deployed an application using DSC (I didn’t), but with a Web Server being the last part of my demo, I needed to call it something that sort of made sense, so I called it an App Server. The build script for the App Server is below, and you can see that it’s much smaller than the previous two build scripts. In this case I wanted to show a minimal configuration for a build script and then demonstrate the process of configuring the App Server to pull a new Configuration.

Here is the Configuration script in its entirety. It’s also available on GitHub.

With that done, the next step is to create a Configuration on the Pull Server for the App Server to Pull. All this Configuration does is change the TimeZone on the App Server. Nothing fancy here. There are also some other pieces at the bottom of the Configuration script I should talk about. I have hardcoded a GUID for the server in the Configuration. You can either use this one or change it to your own. I am setting the source and destination paths and sticking the GUID onto the end of the .MOF file, which is required when you are pulling a Configuration. This GUID is how the server knows which Configuration belongs to it (as we will see here shortly). I am then copying the file from the source path to the destination path, and then creating a Checksum file for the .MOF (which is also required).

Here is the Configuration script in its entirety. It’s also available on GitHub.
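Putting the pieces described above together, a sketch of the generate-rename-checksum steps might look like the following. The GUID, time zone, and paths are placeholders (use your own), and the xTimeZone property schema may differ between versions of the module:

```powershell
# Hypothetical sketch: a trivial Configuration for the App Server to pull
Configuration AppServerTimeZone
{
    Import-DscResource -ModuleName xTimeZone

    Node 'AppServer'
    {
        xTimeZone SetTimeZone
        {
            TimeZone = 'Pacific Standard Time'   # nothing fancy, just a time zone change
        }
    }
}

# Generates .\AppServerTimeZone\AppServer.mof
AppServerTimeZone

# Rename the .MOF to <GUID>.mof so the Pull Server can match it to the node
$Guid        = '11111111-2222-3333-4444-555555555555'   # placeholder; use your own GUID
$Source      = '.\AppServerTimeZone\AppServer.mof'
$Destination = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration\$Guid.mof"
Copy-Item -Path $Source -Destination $Destination

# A checksum file next to the .MOF is required for pull to work
# (the parameter was named -ConfigurationPath in early WMF 4 builds)
New-DscChecksum -Path $Destination
```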

With that done, we need to do one other thing before this is going to work. I pre-copied various DSC Resources to the Pull Server, so now we need to zip up the xTimeZone resource so that it can be copied to the App Server when it pulls its Configuration. You do this by creating a .ZIP file of the xTimeZone module and appending the version number to the end of it. In this case, my file name after creating the .ZIP archive is xTimeZone_1.0.0.zip. This file then needs to be placed in the “$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules” folder, which is the ModulePath we specified in our Pull Server Configuration. Once that is done, we need to run the command below to also create a Checksum file for this archive.
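In sketch form, the packaging steps look something like this. Note that Compress-Archive only exists in PowerShell 5 and later, so on WMF 4 the zip would have been created some other way (Explorer, or [System.IO.Compression.ZipFile]); the module path is a placeholder:

```powershell
# Hypothetical sketch: package the xTimeZone module for pull distribution
$ModulePath = 'C:\Program Files\WindowsPowerShell\Modules\xTimeZone'
$DestFolder = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"

# Archive name must be ModuleName_Version.zip
$ZipPath = Join-Path $DestFolder 'xTimeZone_1.0.0.zip'
Compress-Archive -Path "$ModulePath\*" -DestinationPath $ZipPath

# The Pull Server requires a checksum file for the archive as well
New-DscChecksum -Path $ZipPath
```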

Next we need to create a Configuration that tells the App Server to pull its Configuration. This is done by changing the Local Configuration Manager (LCM) settings on the App Server. In my demo I built this Configuration on the Pull Server and then pushed it to the App Server. The outline for this part of the demo looked like this:

  1. Create LCM Configuration. Comment out the Set line in the script. Explain the meta.mof file.
  2. Show LCM Configuration on App Server
  3. Show Consistency Task settings (there should be none)
  4. Push LCM Configuration from Pull Server to App Server
  5. Show LCM Configuration on App Server compared to previous
  6. Show Time Zone. Run Scheduled Task.
  7. Watch App Server for Time Change
    a. Change Time Zone again to something totally random
    b. Run Consistency Task again, watch Time Zone change again

Here is the Configuration script in its entirety.  It’s also available on GitHub. You should also note that the Configuration GUID from before makes an appearance in this script as well. This GUID is what tells the App Server which Configuration to look for on the Pull Server.
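For reference, a WMF 4-era LCM meta-configuration along these lines might look like the sketch below. The GUID, node name, and pull server URL are all placeholders:

```powershell
# Hypothetical sketch of an LCM (meta) Configuration pointing the App Server at the Pull Server
Configuration SetPullMode
{
    Node 'AppServer'
    {
        LocalConfigurationManager
        {
            # Same GUID used to name the .MOF on the Pull Server
            ConfigurationID                = '11111111-2222-3333-4444-555555555555'
            RefreshMode                    = 'Pull'
            DownloadManagerName            = 'WebDownloadManager'
            DownloadManagerCustomData      = @{
                ServerUrl               = 'http://PullServer:8080/PSDSCPullServer.svc'
                AllowUnsecureConnection = 'True'   # HTTP-only lab setup
            }
            ConfigurationMode              = 'ApplyAndAutoCorrect'
            ConfigurationModeFrequencyMins = 30
        }
    }
}

# Generates .\SetPullMode\AppServer.meta.mof
SetPullMode

# The "Set line" from the outline: push the LCM settings to the App Server
Set-DscLocalConfigurationManager -Path .\SetPullMode -ComputerName 'AppServer' -Verbose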

PowerShell Summit 2014 Recap – Day 1

Well PowerShell Summit 2014 is in the books, and wow, what an event. I feel like I could write an entire blog post about how great the food alone was. If you didn’t make it to the event, I cannot express strongly enough that you need to do everything in your power to get to PowerShell Summit 2015 in Charlotte, NC. Start planning now. Start talking to your boss(es) now. Halfway through the very first day I felt like I had already gotten my money’s worth. Not only was the content of every session fantastic (and in most cases mind blowingly awesome), but the connections you make with people will pay for the trip itself. It was awesome to be able to meet and speak with people I had only previously known from reading their blogs, interacting with them on Twitter, or watching videos of presentations they had done on YouTube.

On to the recap!

Session #1
PowerShell Just in Time / Just Enough Admin – Security in a Post-Snowden world
Jeffrey Snover

This session was a tremendous way to get the week started. Just Enough Admin (JEA) is a PowerShell toolkit that focuses on securing your environment by reducing exposure to admins and administrative accounts. One of the things Jeffrey talked about was an NSA document exposed by Snowden showing that the NSA was actively targeting systems administrators. You can find the document itself here and a breakdown of that document here.  The JEA Toolkit allows you to reduce the number of admin privileges and the scope of those privileges by letting you perform admin tasks without being an admin.

Briefly this is how it works:

  • JEA Toolkit allows you to create remote endpoints on servers with a specified set of abilities (restart services, run certain commands, etc.)
  • These endpoints create a local account with admin access, which anyone who connects to the endpoint runs as when performing tasks
  • This local admin account has a 127-character password that is reset nightly (or more often if you like; however, each reset requires a WinRM restart.  This will be fixed in a future release).

Next Steps:

  • Work Hours – Who can access which endpoints, and when
  • Work Tasks – One shot work hours
  • 2 Factor Authentication (I am pretty sure this is right, my notes and handwriting on this are a little sloppy)
  • DSC Driven Safe Harbors and Jump Boxes (more on this in a different session recap below, but HOLY CRAP!)
  • GUI Tools/Toolkits over JEA Endpoints
  • Approve users for a specific endpoint for a specific time frame (e.g. 5 minutes to restart a service)
  • Collect logs of a JEA session for an audit trail

Session # 2,3,4
The Life and Times of a DSC Resource
Building Scalable Configurations
Patterns for Implementing Configuration with a DSC Pull Server
Steven Murawski

Since these are all about DSC I am just going to lump them all in together. Having all three of these sessions back-to-back-to-back was really beneficial because they all built on each other. It was awesome to see the real world Configurations Steve is using at Stack Exchange and just as importantly, the process he went through to get those Configurations to where they are today.

Some notes from the session(s):

  • DSC Resources should be Discrete, Flexible, Resilient, Idempotent, Chatty (in the logging sense)
  • Use and love Test-DSCResource when building your own Resources
  • Friendly name of a Resource can (and probably should) be different than the Resource name
  • Writing your own Resources requires debugging and error-handling.  DSC Resources are not interactive.  Write-Verbose is your friend.
  • DSC Resources run in the System context
  • Every Configuration he uses has a ConfigData Hashtable in its own .ps1 file
  • You can filter your AllNodes data like Node $AllNodes.Where{$_.Role -like 'WebServer'}.NodeName
  • Composite Resources are key!  Helps to streamline the creation of the .MOF document
  • Considerations for Implementing a Pull Server Environment:  Build Script(s), Source Control, Build Servers, Operations, Logging
  • There are modules on GitHub he has created to speed up and streamline the process of creating, building and deploying Configurations.  You can find those here.
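To make the ConfigData and AllNodes-filtering bullets above concrete, here is a minimal made-up sketch of that pattern; node names and roles are invented for illustration:

```powershell
# Configuration data kept in its own .ps1 file, per Steve's pattern
$ConfigData = @{
    AllNodes = @(
        @{ NodeName = 'WEB01'; Role = 'WebServer' }
        @{ NodeName = 'WEB02'; Role = 'WebServer' }
        @{ NodeName = 'SQL01'; Role = 'SqlServer' }
    )
}

Configuration RoleBasedConfig
{
    # Only nodes whose Role matches get this block
    Node $AllNodes.Where{ $_.Role -eq 'WebServer' }.NodeName
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Emits one .MOF per matching node (WEB01.mof and WEB02.mof here)
RoleBasedConfig -ConfigurationData $ConfigData
```

The .Where{} method syntax requires PowerShell 4.0 or later, which you have anyway if you are running DSC.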

Session #5
Using PowerShell Workflows
Trevor Sullivan

I personally didn’t see anything here that I hadn’t seen or heard before, so I don’t have much to say about it. Some notes that I did write down were:

  • Only supported in Version 3.0 or Later
  • Remoting Enabled requires ports TCP 5985 and TCP 5986 (For SSL)
  • Can be set up to use SSL (which, from the way he talked about it, sounds painful)

Session #6
SCOM – PowerShell Goodness
Jeff Truman

If you use SCOM on a regular basis there wasn’t anything new here.  However!  It was worth going because one of the attendees (I don’t know who it was) said that with Active Directory integration you can install the SCOM agent on a template, and when the machine comes online the agent will show as Managed, and not as a manual agent install you need to approve.  I need to figure out how to do this!  If anyone has information on this I would like to speak to you about it :).

Session #7
Proper Tooling through PowerShell
Jim Christopher

This session was great.  Jim is a great speaker who presented with a lot of humor and a real-world example that was easy to follow, and he explained how it applied to everyone in the room.

Some takeaways from his presentation:

  • Tools should assume batch operations, not single ones
  • Do not assume a human presence.  That is, don’t assume someone is sitting there waiting to put in input or respond to the tool
  • His Entity Shell Module that he created and used for this presentation can be found here.

He also had a hilarious exchange with Steve Murawski.  Jim kept using ForEach in his example and Steve commented that he “died a little every time you use ForEach”.  Jim responded by typing out a bunch of ForEach code blocks on the screen which got a laugh from everyone.

That’s it for the Day 1 recap.  Day 2 coming later!



PowerShell Desired State Configuration (DSC) Journey – Day 2

If you haven’t read the DSC E-Book by Don Jones, do yourself a favor and do that.  It can be found here.

If you missed Day 1, you can find it here.

One of the reasons I mention Don’s book (and not just because it’s awesome and he needs people to review it) is that I immediately learned several useful things from it.  Most important to me on Day 2 of this journey: you can parameterize your Configurations the same way you parameterize PowerShell scripts and functions.

When we left off yesterday I had a Configuration Script that may or may not have been functional and I was getting ready to run it.  Here is the script in its current form.

Knowing what I know now about parameterizing the Configuration, that’s clearly not going to work.  So let’s add an actual parameter block and make it look all proper.
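For anyone following along, a parameterized Configuration along these lines might look like the following sketch. The resource name matches the Verbose output later in this post; the source path is a placeholder:

```powershell
# Hypothetical sketch of the parameterized TestConfiguration
Configuration TestConfiguration
{
    param
    (
        [string]$ComputerName
    )

    Node $ComputerName
    {
        # Copy the DSC directory and its contents to C:\Scripts on the target
        File ScriptPresence
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = '\\FileServer\Share\DSC'   # placeholder path
            DestinationPath = 'C:\Scripts\DSC'
        }
    }
}
```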

What I want this resource to do is copy the DSC directory (and its contents) into the C:\Scripts folder on my test server.  As of right now this Scripts folder doesn’t even exist, so I am hoping this resource knows how to create it.

Back on the Get Started with Windows PowerShell Desired State Configuration page, it states to run the script and it will appear in the console pane.  It runs.  Nothing breaks.  Great success.

Next, I need to enact this configuration by invoking it.  However, before I do this, another little nugget I picked up from Don’s E-Book is that when you run a configuration it’s going to create a folder and one or more MOF files (depending on your number of target nodes) in whatever folder the ISE is currently pointed at.  I would have assumed that it would place the files in the same directory the modules are installed in, but I guess not.

I run the Configuration Script by typing TestConfiguration in the ISE, and I am an idiot.  This is what happens when you are trying to type fast and not paying attention.


So I fixed that problem, and ran the Configuration from inside the ISE so that my session got the change and tried again.  I intentionally ran it without the ComputerName parameter just to see what would happen.  This happened.


Excellent, so knowing what I know about parameters and PowerShell, will this work?
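What I tried here, presumably like you would in any script or function, was the standard Mandatory parameter attribute, which makes PowerShell prompt for a value when none is supplied. Sketched out:

```powershell
# Inside the Configuration's param block: prompt when no computer name is given
param
(
    [Parameter(Mandatory = $true)]
    [string]$ComputerName
)
```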

Run the Configuration again, try to enact the Configuration with no computer name and… it asks me for the ComputerName.  Excellent.  Before I put in the name of my test server, here is what the folder structure on that server currently looks like.  No Scripts folder, no DSC folder, and no PowerShell script.


It completes with no errors.  In my C:\Scripts directory I now have a TestConfig folder and one .MOF file with the name of my server.  So far so good.  Reading further in the TechNet documentation there is this line: “To specify a different directory for the MOF files, use the OutputPath parameter when invoking the configuration.”  Just to test this out, I run my Configuration again with the command below and it works as expected.  Good to know.
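The invocation with OutputPath, in sketch form (server name and path are placeholders):

```powershell
# Put the generated .MOF files somewhere other than the current directory
TestConfiguration -ComputerName 'TestServer01' -OutputPath 'C:\DSC\TestConfig'
```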

On to the last step!  First, I feel obligated to quote this section from the TechNet article in case people aren’t following along:  “The Wait parameter is optional and makes the cmdlet run interactively. Without this parameter, the cmdlet will create and return a job. If you use the OutputPath parameter when invoking the configuration, you must specify the same path using the Path parameter of the Start-DscConfiguration cmdlet.”  I first try to run the command below with no ComputerName parameter, again because I am curious whether DSC is smart enough to see there is only one .MOF file and just automatically apply it.
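The command in question was roughly this; the path is a placeholder, and per the TechNet quote above it must match the folder the .MOF was generated into:

```powershell
# -Wait runs interactively instead of returning a job; -Verbose shows each step
Start-DscConfiguration -Wait -Verbose -Path 'C:\DSC\TestConfig'
```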

And…..failure!  About time something exciting happened.

My target server is Windows Server 2012, so it should have WinRM already configured.  Just to make sure, I log in and run winrm quickconfig, and sure enough it’s already enabled.  Strange.  I next remember another note from Don’s E-Book: you need to have KB2883200 installed on a Windows 8.1 machine or DSC will not work.  I run Get-Hotfix; it’s there.  Damn.  Well, now what?  Quick thought: am I running PowerShell ISE as Administrator?  Yes I am.  Damn.  Just to double check, let’s run the same thing from regular PowerShell.  Same thing.  I run the command with the -ComputerName parameter and specify the target server; same thing.  Well… on the target server I run Enable-PSRemoting (even though it should already be configured) and yes, it is already configured.

I continue reading through Don’s E-Book because I feel like the answer to my problem is probably in there somewhere.  Under the section on Pushing Configurations I find this:  “What this command will not do is take care of deploying any resources that your configuration requires. That’s on you, and it’s pretty much a manual process if you’re using push mode.”  Well, that probably explains the problem: it probably has no idea what a File resource is.  I immediately wonder: is the version of PowerShell on this server 4.0?  $PSVersionTable tells me no, it’s 3.0, which I am going to assume is also a problem.  I download and install Windows Management Framework 4.0 on the server, and while doing so immediately think that if I am going to need to do this for all my 2012 servers, I wish I could do it using DSC and not SCCM or Windows Server Update Services.

Anyways, I install WMF 4.0, reboot, run $PSVersionTable and it’s 4.0.  I run Get-DSCResource and I get the list of installed resources, including the File resource.  Now this is looking promising; let’s try again.  I got farther this time: no red text.  I will spare you all the Verbose output except for this part.

[[File]ScriptPresence] The system cannot find the file specified.

[[File]ScriptPresence] The related file/directory is: C:\Scripts.

[[File]ScriptPresence] The system cannot find the path specified.

[[File]ScriptPresence] The related file/directory is: C:\Scripts\DSC.

[[File]ScriptPresence] The path cannot point to the root directory or to the root of a net share.

[[File]ScriptPresence] The related file/directory is: C:\Scripts\DSC.

LCM:  [ End    Test     ]  [[File]ScriptPresence]  in 0.1250 seconds.

The problem is that it doesn’t like my paths.  For one thing, the source directory in the Configuration isn’t what I made it on my computer, so I fix that.  It also doesn’t appear to like the fact that the destination is missing the Scripts folder.  However, I check the target server and the Scripts folder is there.  Excellent news.  After I fix the Configuration, I run all the same commands as above to generate a new .MOF file.  Now everything looks good, except my .ps1 file didn’t get copied.  Well, perhaps I shouldn’t set -Recurse to $False in my Configuration?  I change that, re-run everything again… AND… failure.  I suspect the source needs to be on a network-accessible share and not my local hard drive.  That will be a good point to pick up on tomorrow.  For now, I need to go run some 800m intervals.


Use Start-Sleep to Delay PowerShell Script Execution in Operations Manager Recovery Task

I had an issue where I needed a monitor in Operations Manager to start a service when it crashed, which is easy enough to do (you can find all kinds of examples of it), except that in my case the service wasn’t just stopping immediately.  It is a custom homegrown application that detects a dropped connection, writes an event to the event log (which Operations Manager was watching), and then kills the service.  The problem is that it would write the event and then attempt to stop the service, which took several seconds.  In that interval, Operations Manager would pick up the event, try to start the service, determine that it wasn’t stopped, and fail.  So I needed to figure out a way to “delay” the recovery task.  I couldn’t find a way to do this natively in the recovery task, so I did it in PowerShell.  The PowerShell “script” itself was a simple two lines.
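A sketch of what those two lines would look like; the service name here is a placeholder:

```powershell
# Give the crashing service time to finish stopping before restarting it
Start-Sleep -Seconds 30
Start-Service -Name 'MyHomegrownService'
```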

This forced the script to wait 30 seconds and then start the service.  Here is what the recovery task in SCOM looks like.


Granting Permissions to Operations Manager 2012 Application Advisor Console

First, a disclaimer:  I wasn’t around when this environment was set up; I was hired several months afterwards, and when I found out that Operations Manager wasn’t being utilized at all, I took it upon myself to change that.  So, that being said, this may not be applicable to your environment, but this is what worked for me.

The first issue I ran into was that I was able to log in to the Application Advisor console from the Operations Manager server, but if I tried from my desktop, it would not allow me to log in.  The application log on the Operations Manager server recorded the following:

Which didn’t really tell me a lot.  However, Googling variations of this error message led me to this article on TechNet, which in turn led me to this one.  The fix really was that simple.  I changed the authentication methods on the ApplicationAdvisor IIS website and was able to log in from my desktop.



Now, for the second problem.  Several developers have access to the Application Diagnostics console and have no problems viewing the application information they have rights to.  However, when they try to log in to Application Advisor, they get the error message below stating that they are unauthorized.

Which is really strange considering they can authenticate to the Application Diagnostics console, and can log in to the Operations Manager Web Console and view their application.  After spending quite a bit of time looking into SQL Server permissions and the Application Advisor application pool, I added both developers to the Operations Manager Report Security Administrators group, and that magically solved the issue.

What Not To Do As A Manager – Volume 1

This is the first post in what will be a multi-part series about what not to do as a manager or leader.  For better or worse in the last year I have had the “opportunity” to witness a lot of things that I wouldn’t recommend any manager or leader do if they want to be successful. This particular example applies to any aspect of your life, not just the professional part.

One of the two jobs I worked during my two-year odyssey was a contract position for a large construction firm.  The job was to perform a remote datacenter consolidation and standardization.  The subject of this post is the Project Manager for said project; we will call him Matt.

My second week there Matt took me to meet another guy on the Infrastructure team that I had been communicating with via email about some System Center Configuration Manager reports he was going to write for the project.  As we were leaving his desk Matt turned around and just flipped the guy off.  Right in the middle of everyone.  I was incredulous.  The guy who got flipped off?  He must have been used to it because he didn’t even acknowledge it.  For my part, I was so stunned I didn’t even say anything.  After about 5 seconds of Matt flipping him off without a response, Matt just walked away.  My only thought was “That was completely messed up.”  Trying to give Matt the benefit of the doubt I tried to convince myself that there must have been an inside joke or something involved.  However, his actions really bothered me and the fact I didn’t even say anything bothered me even more.

Fast forward two more weeks.  I’m sitting at my desk when Matt walks by on his way to lunch.  Neither one of us says anything to the other and right before he walks out the door to go to lunch, he turns back around, walks over and flips me off.  What makes this more incredible is that I shared the cubicle area with the other contractor so he witnessed everything I am about to transcribe.

Me (sarcastic voice):  “Wow Matt, you are really cool!”

*5 second silence while we stare at each other*

Matt:  “Hold on I’ve got something else for you”

*reaches his other hand into his jeans pocket*

*launches the middle finger on that hand*

Matt:  “Boom!”

*5 second silence while I stare at him with a completely unimpressed facial expression*

*5 more seconds*

*5 more seconds*

He doesn’t say a word and just walks off.  For the following three weeks he never said a word to my face despite being my project manager and walking by my desk multiple times every day.  For my part, I never attempted to initiate any kind of a conversation as I had zero desire to ever speak to him again.  I left the contract job within the month.

I never disclosed this information to HR at the company because I knew I would be leaving as soon as possible, should I have done so?  How would you have handled the situation?


My Name Is……

Dear All Recruiters –

My name is Jacob.  Not Jason.   Say it again.  Jacob, not Jason.

You will not find my name on this website, on Twitter, on LinkedIn, on Facebook or anywhere else as Jason so please stop addressing emails and LinkedIn messages to Jason.  If you can’t even get that one “small” detail correct, I have zero interest in communicating with you.