Error 3000 – Unable to Register to Advisor Service

I was able to get into the current Limited Preview of the System Center Advisor service. Everything went great with the initial setup and configuration except for registering Operations Manager with the Advisor service. No matter what I did, I kept getting “Error 3000: Unable to Register to Advisor Service. Please contact the system administrator.”

If you Google this error message, you get pointed to this article by Stefan Roth. That article states that the issue is a time sync problem. I went through all the steps, and still no luck. I reached out on Twitter to Daniele Muscetta and we did some initial troubleshooting because he believed the issue was with a Proxy server (which turned out to be correct).

The catch was that we don’t use a Proxy server (anymore). We did last year when I was trying out System Center Advisor, but I ended up uninstalling it and removing all the management packs. After a bunch of initial troubleshooting, we were kind of stuck and set up a meeting to run a diagnostics tool. And that’s when I stumbled onto something magical.

While using the Commands add-on in PowerShell to search for Override-related cmdlets for SCOM, I happened to notice there was a cmdlet called Get-SCAdvisorProxy. I ran this command, and sure enough, it had the name of our old proxy server in it! There is also a Set-SCAdvisorProxy cmdlet, so I ran the command below to set the proxy to nothing.
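
The exact commands aren’t preserved in this post, but the gist was something like the sketch below. The parameter usage for Set-SCAdvisorProxy is my assumption; check Get-Help Set-SCAdvisorProxy on your management server before running it.

# Show the proxy currently configured for the Advisor connector
# (this is where our old, long-gone proxy server showed up)
Get-SCAdvisorProxy

# Clear the proxy setting by passing an empty value
# (assumption: the proxy address is the first positional parameter)
Set-SCAdvisorProxy ""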

Once that was done, I was able to register with the Advisor service and everything is looking great!

Using Start-Transcript in ISE Profile

This week on the Scripting Guys blog they have been doing some posts on PowerShell user profiles, the kinds of things people have in them, and what they are for. This prompted me to finally do something I have been meaning to do for a while: add something to my ISE profile that starts a transcript every time I open ISE.

Some time ago I stumbled upon this module that lets you run Start-Transcript in ISE. I have only used it sparingly however, because I always forget to turn it on. Not after today!

The format for Start-Transcript is pretty simple: all you need is a path to store the transcript file in.
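
The snippet itself isn’t shown here, but it is essentially just this (the path is an example):

# Log everything from this session to a text file
Start-Transcript -Path 'C:\Transcripts\ISETranscript.txt'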

Now, this is great and all, but when I open up my ISE console tomorrow, or the next day, I don’t want it to just keep piling everything into the same text file. I wonder if I can just name the file with today’s date using the Get-Date cmdlet?
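
Something along these lines (the date in the commented output is just an example of the default format):

Get-Date
# Tuesday, June 17, 2014 9:07:12 AM   <- the default output; not exactly filename material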

Well, that’s not really helpful. I spent about the next 10 minutes exploring all the various methods and properties of Get-Date and wasn’t having any luck, so I decided to go to the help file. It also just so happens that the last example does exactly what I am looking to do, so I included that as well.

I don’t like the format of that DateTime object, so I decided to investigate what other format options there were.

I searched for Format, which right in the help file gave me the link to the MSDN article explaining the different format options. Looking at all the different format patterns, I elected to just start with the short date pattern, d, and go from there.

Well, look at that! Exactly what I need, except for the /’s, because I can’t use those in a filename. Referencing the example in the help file, I changed my command to this.
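
Something like this; the exact format string is my guess, the point being to swap the slashes for a separator that is legal in a file name:

Get-Date -Format 'MM-dd-yyyy'
# 06-17-2014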

Perfect! Now, let’s tie this all together.
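
Roughly like this. The Import-Module line is a placeholder for whatever the ISE transcript module linked above is actually called; the path and date format are just the examples from earlier:

Import-Module IseTranscript   # placeholder name for the module linked at the top of this post
$Date = Get-Date -Format 'MM-dd-yyyy'
Start-Transcript -Path "C:\Transcripts\ISE-$Date.txt" -Append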

I added the Append parameter because if I open and close ISE throughout the day, I want it to keep adding on to the existing file rather than overwrite it every time I restart ISE.

All you need to do is download the module linked at the beginning of this post and then add these 3 lines to your profile (you can run notepad $profile from PowerShell to make it easy on yourself).

PowerShell DSC Journey – Day 29

While working on some demo scripts for a presentation I am doing on DSC for the Infrastructure team I belong to at work, I ran into a couple of issues and figured I would blog about them. I haven’t been publishing a lot of blog posts lately because I am working on a series of posts about a little project I got myself into with DSC. Here is the Configuration I am using for the demo.
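
The Configuration isn’t reproduced here, but it was along these lines: a few randomly chosen Windows Features, one of them with a log file (the feature names and paths are illustrative, not the exact ones from the demo):

Configuration DemoConfig
{
    Node 'localhost'
    {
        WindowsFeature Backup
        {
            Ensure = 'Present'
            Name   = 'Windows-Server-Backup'
        }

        # .NET 3.5 needs the install media's Sources\SxS folder, which fits the failure described below
        WindowsFeature NetFx35
        {
            Ensure  = 'Present'
            Name    = 'NET-Framework-Core'
            LogPath = 'C:\Scripts\NetFx35.txt'
        }
    }
}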

It’s just doing some really basic stuff to show some of the things that DSC can do. I randomly selected the Windows Features I am installing, which is where I ran into some issues.

Using the Trace-xDscOperation cmdlet, I then find this in the logs.
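
Trace-xDscOperation comes from the xDscDiagnostics module; a typical invocation looks like this (SequenceID 1 is the most recent DSC operation):

# Pull together the event log entries for the most recent DSC run
Trace-xDscOperation -SequenceID 1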

Also, since I included a log file for this Windows Feature in my Configuration, here is what a section of that log file contains.

There are about 30 lines of that, with the progress getting up to 68 a couple of times before it fails and rolls back. Well, this issue appears pretty obvious. I believe it is looking for the Sources\SxS directory on the Server 2012 install media, and it’s telling me that I need to add that source to the Configuration. Can do! In my example I am using an attached DVD as the source, but ideally I think you would want to extract those files to a file server and copy them over to the server, so they are there for any future Windows Feature Configuration changes. Here is what my WindowsFeature PowerShell Resource now looks like.
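
Something like this, with the attached DVD as D: (the feature name matches the sketch above; adjust the Source path to wherever your Sources\SxS lives):

WindowsFeature NetFx35
{
    Ensure  = 'Present'
    Name    = 'NET-Framework-Core'
    LogPath = 'C:\Scripts\NetFx35.txt'
    Source  = 'D:\Sources\SxS'
}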

And Voila!

PowerShell DSC Journey – Day 28

The Windows Management Framework 5.0 Preview is now available.  One of the new features is the PowerShellGet Module.  While working on some blog posts for my DSC series using the xHyper-V Resource and messing around with this module, I came across something I found interesting.  Which led me to some more interesting somethings.  Off we go!

First, I wanted to see what DSC Modules were available (if any) using PowerShellGet.
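
The command and its output aren’t preserved here, but since the experimental DSC resource modules all start with x, a simple wildcard search does the job:

# List the DSC resource modules published to the preview PowerShellGet source
Find-Module -Name x*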

4 things immediately stood out to me about this. There are DSC Resources for xJea (which I expected to find based on TechEd), xOneGet (!!!) and xAzureVMResources. The other thing is that the version of the xHyper-V module is 2.1.1, which is different from the version I had on my computer, which was 2.1.0. Setting aside the xJea piece for a minute, I decided I would install the newer version of the xHyper-V Resource. For this example I intentionally deleted the module first so that I would have some actual output to show you.
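
Installing (or re-installing) it is a one-liner:

# Grabs the newest published version (2.1.1 at the time of this post)
Install-Module -Name xHyper-V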

So now, let’s go look at it and see what is in it. I originally wanted to see if the version was in fact updated (which it is), and then I stumbled onto this.
[Screenshot: dsc47]
That isn’t at all what the folder used to look like. It used to just contain the DSCResources folder and the .psd1 file. There is now a licensing document, a .html file containing all the TechNet documentation for the Resource, and other fun stuff. Here are the contents of the Misc folder.
[Screenshot: dsc48]
Say what?!?!?!?!?!?!? I haven’t had a chance to explore any of these, but here is what the VMSwitchGenerator.ps1 looks like.

Here is what the contents of the Examples folder look like.
[Screenshot: dsc49]
Here is what the Sample_xVHD_NewVHD.ps1 looks like.

So, needless to say, there is a lot of awesome going on here that I need to explore.

Now, back to xJea. After seeing this, I was curious: what does the xJea Module include? If you need to install the module (and you are on the WMF 5.0 May Preview), just use this command.
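
That command isn’t shown here, but it is simply:

Install-Module -Name xJea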

And here is what you get.
[Screenshot: dsc50]
A docs folder? What is in there?
[Screenshot: dsc51]
You have got to be kidding me???? Nope. The PowerPoint is the presentation from TechEd, and the Word document is awesome; if you are reading this, you need to read that document ASAP.
And here are the contents of the Examples folder.
[Screenshot: dsc52]
The actual demo scripts! Yahtzee!

Now, after this I was like, what the hell, let’s just install all the DSC Resources using PowerShellGet and see what else I get!
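
The command isn’t shown, but piping the search results straight into Install-Module does the trick (assuming the x* filter catches everything you care about):

# Install every experimental DSC resource module the preview source knows about
Find-Module -Name x* | Install-Module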

Looks good, onto the next step!

Not gonna lie, I was kind of disappointed: looking through the DSC Resource Module folders, only xHyper-V, xJea, xOneGet and xAzureVMResources have examples, scripts and/or documents in addition to the things we have seen so far. Those all look to be really helpful and built with the idea of making DSC even easier to learn and use. I am really excited about this development.

PowerShell Desired State Configuration (DSC) Journey – Day 27

When we last left off, I was having some issues using injected DSC Configurations with Virtual Machine Manager. After talking with the DSC Team at the PowerShell Summit, I realized that one major issue I had hit was actually a bug. To avoid unnecessary headaches I am skipping ahead to step 2.2 of this document, which is without a doubt (in my mind anyway) the best way to do this.

Here is what I have done since the last blog post, in order to make my life easier:

  • I have created a DSC Blank Template in VMM.  This template has only the OS installed (2012 R2) with the latest updates, and the WMF 5.0 Preview.
  • I built a new Pull Server for testing off of this template
  • I built a new VM off of my Blank Template, where I am going to be placing my injected Configuration and the .CMD, and saving it as a template.  I will then deploy my Test VMs off of this Template.  I am going to call this template DSCBase.

Also, I finally figured out why I was having such a problem with my injected DSC configurations from previous blog posts. You can find that article here. I am working with the DSC team to see what I can do about that, but really I probably just need to get with my network team and figure out a way to fix it from our side.

Since I don’t want to mess with setting an IP address, for obvious reasons, I am going to go with this Configuration to put on the template.
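
The Configuration itself isn’t reproduced in this post, but based on what follows it looked roughly like this: disable UAC with xSystemSecurity, create a C:\Scripts folder, install BITS with a log path, and rename/domain-join with xComputer. The computer name, domain and credential handling are illustrative (and note that compiling a credential into a MOF requires PSDscAllowPlainTextPassword or an encryption certificate in the ConfigurationData):

Configuration DSCBaseConfig
{
    param ([PSCredential]$DomainCred)

    Import-DscResource -ModuleName xSystemSecurity
    Import-DscResource -ModuleName xComputerManagement

    Node 'localhost'
    {
        xUAC DisableUAC
        {
            Setting = 'NeverNotify'
        }

        File ScriptsFolder
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\Scripts'
        }

        WindowsFeature BITS
        {
            Ensure  = 'Present'
            Name    = 'BITS'
            LogPath = 'C:\Scripts'
        }

        xComputer RenameAndJoin
        {
            Name       = 'DSCTest01'
            DomainName = 'contoso.com'
            Credential = $DomainCred
        }
    }
}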

Alright, back to Step 2.2 “Inject a Meta-Configuration to Download a Document”. Deviating slightly from the instructions (I am instead going to follow the way the DSC E-Book does it), I am going to generate a new GUID for my Configuration, copy the localhost.mof to the Pull Server with the GUID attached, and create a new DSC Checksum for that file using the steps below.
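
The steps look something like this (the pull server name and the default DscService paths are assumptions; adjust for your environment):

# Generate a GUID to use as the ConfigurationID for this node
$guid = [guid]::NewGuid()

# Copy the compiled MOF to the pull server's configuration store, renamed to <GUID>.mof
$dest = "\\PullServer\c$\Program Files\WindowsPowerShell\DscService\Configuration\$guid.mof"
Copy-Item -Path .\DSCBaseConfig\localhost.mof -Destination $dest

# Create the checksum file the pull server uses to validate the MOF
New-DscChecksum -ConfigurationPath $dest -OutPath (Split-Path $dest)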

And here are the files on my Pull Server:
[Screenshot: dsc35]

The next step is to create a MetaConfiguration that will tell whatever VM is built off of the DSCBase Template to pull its Configuration from the server. Using the Script in the article, this is my Configuration (especially important to note that I replaced the ConfigurationID with the GUID I created).
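
The meta-configuration looks roughly like this in WMF 4.0. The ConfigurationID is the GUID generated above; the pull server URL and port follow the common xDscWebService sample defaults, so treat them as placeholders:

Configuration PullClientConfig
{
    Node 'localhost'
    {
        LocalConfigurationManager
        {
            ConfigurationID           = 'INSERT-THE-GUID-FROM-ABOVE'
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl               = 'http://pullserver:8080/PSDSCPullServer.svc'
                AllowUnsecureConnection = 'True'
            }
            RebootNodeIfNeeded        = $true
        }
    }
}

# Compiling this produces localhost.meta.mof in the output folder
PullClientConfig -OutputPath .\PullClientConfig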

I build the Configuration, and a localhost.meta.mof gets generated.

The article states: “Please note your metaconfig.mof should only contain MSFT_DSCMetaConfiguration and MSFT_KeyValuePair instances. You may need to manually remove OMI_ConfigurationDocument if it exists.” I open up the localhost.meta.mof file and, sure enough, I have that section in my file (shown below), so I remove that section and save it.

Then I copy the localhost.meta.mof file to metaconfig.mof and manually copy it to my DSCBase Template VM under the %systemdrive%\Windows\System32\Configuration folder. I also opened up the metaconfig.mof and made sure it didn’t contain the OMI_ConfigurationDocument section (just because I am paranoid).

Here is what the %systemdrive%\Windows\System32\Configuration folder looks like on my DSCBase Template.
[Screenshot: dsc36]

For the final two steps, I will attach the unattend.xml file to the Template when I create it. I copy the RunDSC.cmd file I had used previously to C:\DSC on the DSCBase Template VM. I don’t see anything that indicates this file should need to be changed from what I used when testing option 2.1.

As a side note, I wanted to configure the Event Logs to enable the DSC Analytic and Debug logs but decided I was going to save that for either a Custom Resource or a customized Script Resource inside my Configuration.

Next step is to create the actual VMM Template from the VM itself. When that is done I will deploy a VM from this Template, and hopefully magic happens :).

[Screenshot: dsc37]

Great success! The Scheduled Task was created,  and the screenshot below shows that the LCM is configured properly, with the correct Configuration ID.

[Screenshot: dsc38]

I want to see if the new server has pulled its Configuration from the Pull Server, so I run the following commands on the Pull Server.
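
The exact commands aren’t preserved in this post. One rough way to eyeball recent pull activity, assuming the standard IIS-hosted pull server, is to look at the tail of the newest IIS log (this is my approximation, not necessarily what was run originally):

# Find the most recently written IIS log and show its last few requests
Get-ChildItem C:\inetpub\logs\LogFiles -Recurse -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 10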

Looks like the last time anything ran on the Pull Server was about 15 minutes ago (from the time I am writing this), and about 10 minutes or so from when the VM was built.

While I was waiting for this, I realized I have a huge problem. The modules on the Pull Server that need to be transferred to the new VM aren’t in the right place, or in the right format. Per the DSC E-Book: “There’s a specific naming convention required for modules deployed to a pull server. First, the entire module must be zipped. The filename must be in the form ModuleName_version.zip. You must use New-DscChecksum to create a corresponding checksum file named ModuleName_version.zip.checksum. Both files must be located in the pull server’s ModulePath.” In somewhat good news, if I open the localhost.mof I created, it specifies the version of the Modules that I need to be present on my Pull Server. I am going to need the following Modules and Versions.

  • PSDesiredStateConfiguration, Version 1.0
  • xComputerManagement Version 1.2

In the interests of time I decided to just download all of the currently released Resources from one .zip file found here, and then modify the two that I need for this specific example. One thing I am not sure of is the PSDesiredStateConfiguration Module. In the .MOF its name is just that, without the x. But the Module itself, when you download it, is called xPSDesiredStateConfiguration, so I guess we will see what happens.
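
Packaging a module for the pull server then looks something like this, using xComputerManagement 1.2 as the example (paths are the defaults; any zip tool works as long as the module contents end up at the root of the archive):

Add-Type -AssemblyName System.IO.Compression.FileSystem

$source = 'C:\Program Files\WindowsPowerShell\Modules\xComputerManagement'
$zip    = 'C:\Program Files\WindowsPowerShell\DscService\Modules\xComputerManagement_1.2.zip'

# Zip the module folder using the ModuleName_Version.zip naming convention
[System.IO.Compression.ZipFile]::CreateFromDirectory($source, $zip)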

[Screenshot: dsc39]

This gets even more confusing. Doing some digging, the xPSDesiredStateConfiguration resource is on Version 2.0 on TechNet. Running Get-DSCResource, there is both a PSDesiredStateConfiguration Module and an xPSDesiredStateConfiguration Module. Running Get-DSCResource on my VM, I see that the PSDesiredStateConfiguration Module is already there (as it should be on Server 2012 R2). The very first line in my Configuration is to import the xSystemSecurity Module, so I am going to need that as well. Interestingly enough, this is what the .MOF file shows for that instance of my Configuration.

The Module name itself is PSDesiredStateConfiguration, but in SourceInfo it says it needs xSystemSecurity. You can color me confused right now. I am going to add that to my zipped Modules, just because I feel like it should be there. The xPSDesiredStateConfiguration doesn’t need to be there, but I am going to leave it there because it’s not going to hurt anything.
[Screenshot: dsc40]

I am now going to force the server to pull its Configuration using this command (again, from the DSC E-Book).
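
Paraphrasing from memory, that command builds a parameter hashtable and splats it into Invoke-CimMethod against the LCM’s CIM class. The Flags value is the part I am least certain I have reproduced correctly here:

$params = @{
    Namespace  = 'root/Microsoft/Windows/DesiredStateConfiguration'
    ClassName  = 'MSFT_DSCLocalConfigurationManager'
    MethodName = 'PerformRequiredConfigurationChecks'
    Arguments  = @{ Flags = [System.UInt32]1 }   # kick off a consistency/pull check now
}
Invoke-CimMethod @params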

And I get this handy error message!

Which is a load of crap because as my screenshots clearly show, the module is there! And as this screenshot shows from my Pull Server, it is the correct Module (and version).
[Screenshot: dsc41]

Knowing some issues I had previously, I decide to reboot both the Pull Server and my VM. While my Pull Server reboots, I was looking in the DSC Operations Log on the VM and found this message from when I forced the Configuration, so this is working as expected, at least.

I AM AN IDIOT. I DON’T HAVE A CHECKSUM!!!!!!!!!!!!!!! EVEN THOUGH I WROTE ABOVE THAT I NEEDED ONE!!!!!!!!!!!! ARGGGGGGGGGGGGGGGGG!!!!111!!111! But seriously, it would be nice if the error message said “hey, you don’t have a checksum” instead of that it cannot find the module.
So, after that, I run the commands shown below and have all my Checksum files!
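
The checksum commands are just New-DscChecksum pointed at the zipped modules on the pull server (the path and version are from the example above; repeat for each zip):

New-DscChecksum -ConfigurationPath 'C:\Program Files\WindowsPowerShell\DscService\Modules\xComputerManagement_1.2.zip' -Force
# ...and the same for the other ModuleName_Version.zip files
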
[Screenshot: dsc42]

Back on the VM I run the Invoke-CIMMethod @params command, and I get some good, and some “bad”. First the good!
[Screenshot: dsc43]

Well, that means at least UAC got turned off in the Configuration! However, you can also see from the red text behind it that not everything went as expected.

Which is wonderful, because this literally tells me nothing. Also, the LCM is set to Reboot if needed, so it should have rebooted (I am guessing it didn’t because the Configuration failed at some point).

According to the DSC Operations log on the VM, this is what happened.

Well, I only have one RoleResource in my Configuration, so let’s look at that. The C:\Scripts folder was created, but the error states that it needs a file name. And then I see it. My LogPath parameter in my WindowsFeature BITS block just has C:\Scripts, with no file name. I change that line to C:\Scripts\Bits.txt. I then go through all of the same steps I did above to recreate the .MOF, copy it to the Pull Server, and generate a new checksum with New-DscChecksum (I remembered!).
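
For reference, the corrected resource block; only the LogPath line changed:

WindowsFeature BITS
{
    Ensure  = 'Present'
    Name    = 'BITS'
    LogPath = 'C:\Scripts\Bits.txt'   # was 'C:\Scripts' - LogPath needs a file name, not just a folder
}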

Back on the VM, I force a pull again and………..
[Screenshot: dsc44]
[Screenshot: dsc45]

Now, the server didn’t reboot, but it was already waiting for a reboot from before, so maybe that is why? That is something else I will need to clarify with the DSC team. But for now, I am happy! After the reboot the VM was also joined to the domain, just as it should have been!

PowerShell Desired State Configuration (DSC) Journey – Day 26

Today I am doing some further testing for my own benefit and the DSC Team wanted me to try something as well.  It occurred to me last week at the PowerShell Summit after talking with some people that the Local Configuration Manager on the VM I am using is set to the defaults, which means it doesn’t reboot if needed.  Maybe this is the cause of all my problems so far?

Here is what I am going to test using the Configuration below (a sketch of it follows this list):

  • I am going to push that Configuration to the VM first, to ensure that it works (after taking a snapshot)
  • Revert the snapshot, inject the .MOF file using default LCM settings, save it as a template and build a VM off of it.  See if it renames the computer, and joins the domain.  This should fail.
  • Rebuild the template with the same injected .MOF file, but configure the LCM to reboot if needed.  Build a VM off of this template, it should rename the computer and join the domain.
  • If for some reason that doesn’t entirely work, I may need to also set the LCM Configuration Mode to Apply and Autocorrect.
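
The Configuration isn’t reproduced in this post, but it was along these lines: set a static IP with xIPAddress, then rename the computer and join the domain with xComputer. The addresses, names and interface alias are illustrative, and (as with the earlier sketch) compiling a credential into a MOF needs PSDscAllowPlainTextPassword or an encryption certificate:

Configuration TestVMConfig
{
    param ([PSCredential]$DomainCred)

    Import-DscResource -ModuleName xNetworking
    Import-DscResource -ModuleName xComputerManagement

    Node 'localhost'
    {
        xIPAddress StaticIP
        {
            IPAddress      = '192.168.1.50'
            InterfaceAlias = 'Ethernet'
            DefaultGateway = '192.168.1.1'
            SubnetMask     = 24
            AddressFamily  = 'IPv4'
        }

        xComputer RenameAndJoin
        {
            Name       = 'DSCTest02'
            DomainName = 'contoso.com'
            Credential = $DomainCred
        }
    }
}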

OK. I copied the .MOF file to this VM, ran it locally, and got the output below.

[Screenshot: DSCTerminatingErrorDefaultLCMConfig]

One thing I should mention is that the VMs I am testing with are getting a DHCP address from the domain. I get the same result when I set the LCM to RebootIfNeeded = $True. What’s happening is that I am setting the IP address, and then it is losing its network connection entirely (the Ethernet icon in the bottom right turns to a warning sign, even though the IP is set correctly). Doing some digging in Failover Cluster Manager, come to find out that the VLAN the network adapter defaulted to was not the correct one (and as far as I know there isn’t a way in VMM to set the VLAN, only the virtual switch. Need to do some research on that).

The VLAN that is handing out the DHCP address doesn’t work once I set the static IP. After I set the IP, the VLAN on the network adapter needs to change so that the VM can contact the domain. This is going to be a problem for us, and I am not sure how to work around it.

Here is what does work to get this Configuration working:

  • Set the network adapter VLAN to the DHCP VLAN
  • Run my Configuration.  It will set the IP, the rest will fail (can’t contact the domain) with the same terminating errors as above.
  • Change the VLAN to the one that it can now contact the domain with (with its new IP).  I have to restart the VM for this change to pick up.
  • Run the Configuration again (with default LCM settings).  This works like a charm.  It changed the computer name and joined the domain, now it’s just waiting for a restart  and says so.
  • Restart the computer, rename the computer to something random and disjoin it from the domain.  Set the LCM to RebootIfNeeded = $True.  Run my Configuration and….CUE THE MUSIC OF ANGELS SINGING!  It happened too fast for me to get a screenshot, but it came up and said “DSC Needs to Restart Your Computer.  Signing Out”.  That lasted for about 5 seconds and then the reboot happened.  Magical!

So, to summarize: whether the LCM reboot setting is false or true, as long as you don’t have any quirky/weird/normal? network setup like I have, there shouldn’t be any issues joining the domain or renaming the computer. I will be passing these findings on to the DSC team to see what they have to say about it. I expect I may have to write a custom resource or two (to change the VLAN on the network adapter, and to restart the VM after doing so) before continuing on with the Config. I am going to be testing using a Pull Server configuration next (injecting a Configuration to set the LCM) but am going to run into the same issues with the VLAN stuff.
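
For reference, flipping that LCM setting is just a small meta-configuration applied with Set-DscLocalConfigurationManager (the property is named RebootNodeIfNeeded in the LCM itself):

Configuration LCMRebootIfNeeded
{
    Node 'localhost'
    {
        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true
        }
    }
}

LCMRebootIfNeeded -OutputPath .\LCMRebootIfNeeded
Set-DscLocalConfigurationManager -Path .\LCMRebootIfNeeded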

What I Learned From the Microsoft DSC Team at the PowerShell Summit

If you have been following my blog at all, you know that I have been using this article to do some testing with Virtual Machine Manager using injected DSC Configurations.  You will also know that I have had quite some difficulty getting things to work, and I couldn’t really come up with any explanations why.  I was able to spend some time at the PowerShell Summit this week speaking with members of the DSC team, and found out quite a few things which I have outlined below.

  • If you inject a simple Configuration that does things that require no reboots and it works, everything will work fine
  • If your Configuration, at any point during the first time it runs, fails, you are screwed.  The pending.mof file gets deleted, and nothing you do can get DSC to recognize it again.  This was the big issue I ran into, and it turns out that it is a bug.  They know about it and are working on it.
  • Along those lines, if your Configuration requires a reboot, you are also screwed.  While that article explicitly says “feel free to replace this configuration with your own. The xComputer page has more samples for common tasks including renaming your computer, joining a domain, etc.”, don’t do that.  Most of those tasks require a reboot.  What will happen is your Configuration will run, and in my case I was trying to rename the computer, then join the domain.  The computer rename requires a restart.  After it renames the computer, it just sits there and does nothing (remember, it’s running as a scheduled task in the background), waiting for a reboot.  If you forcefully restart the computer, the Configuration fails and you are screwed again.  I wonder if setting the Local Configuration Manager to apply and autocorrect, and to restart when needed, would get past this issue.  I haven’t tried this yet, but will be doing so this week (maybe even today!)
  • Obviously, you can get around all of these issues just by using a Pull server and having your injected DSC Configuration configure the LCM and then pull its Configuration from the Pull Server.  This will allow it to reboot whenever it needs to, and then pull the Configuration again and again until everything is set the way it is supposed to be.
  • In the Scheduled Task the .CMD file is creating, there is this line:  Arg @{Flags = [System.UInt32]3 }'” .  There are 3 possible values for the end of that line.  The 3 tells the scheduled task that it is a bootstrap operation.  Following up on above, once the pending.mof fails, something with DSC changes this value to a 2.  You cannot change it back to a 3.  It will let you change it, but as soon as you say Apply and OK, if you go back and look at it it’s a 2 again.  I even deleted this task entirely and recreated it using the .CMD, and I still couldn’t get it to take anything but a value of 2.  In talking with the team this is evidently how it’s supposed to work.  The 2 refers to the DSCRestartBoot Task, and the 1 refers to the Consistency check Scheduled Task.
  • Not related to the rest of this, but I was also told (not by a member of the DSC team) that if you set the IP Address, Gateway, Subnet, etc. using DSC and then remove just the Gateway, when the Configuration runs again it won’t change the Gateway back to what it’s supposed to be.  I haven’t tested this myself, but I have no reason to believe they were making this up.

PowerShell Summit 2014 Recap – Day 3

If you missed the Day 2 Recap, you can find it here.

Session #1
Using PowerShell to Configure Secure Environments and Delegated Administration
Mark Gray and Someone (Schedule says Kenneth Hansen but he couldn’t make it and someone else came, and I forget their name)

This was the most amazing demo I saw all week.  This presentation focused on creating a “Safe Harbor” inside a corporate domain using PowerShell Desired State Configuration and the JEA Toolkit.  In about 30 minutes, the presenters were able to:

  • Create a new domain with a one way trust back to the corporate domain
  • Create and configure a DC in this domain
  • Create and configure a DSC Pull Server
  • Create a JEA Management Server (Jump Box)
  • Create 3 File Servers that pulled their Configurations from the Pull Server and configured themselves
  • Part of that Configuration was the creation of secure remote endpoints using JEA and the configuration of the local admin accounts

Other Notes:

  • Blog post on securing credentials in Desired State Configuration is here.
  • They are planning to release all the scripts and other material they used in the demo in 2-3 weeks (hopefully), once they get the scripts cleaned up
  • They used a DomainTrust resource which is not yet available (they might have said it was created for this demo only)
  • For every server they used the same .VHD that had WMF 5.0 and Server 2012 R2 with all the latest updates
  • Using endpoint configurations you can restrict parameters of commands (ie, when someone uses the ComputerName parameter you can restrict the Computer Names they can put in there)

Session #2
PowerShell and the Web – Leveraging Web Services with PowerShell
Trond Hindenes

This was another really interesting session.  Most of it was way over my head and out of my realm of expertise (by a lot), so I don’t have a lot to say about it other than that.  It was funny listening to him go on a mini rant about using JSON vs XML.

Session #3
Monitoring Using PowerShell
Josh Swenson

This was a session full of nothing but PowerShell scripts he uses to help monitor his environment.  He also talked about how those scripts came to be, what he used them for, and gave a basic rundown of how they were used.  There was some good discussion at the end about how people use PowerShell to monitor their own environments.

You can find all the scripts he used for the presentation here.

Session #4
Detailing Your Objects
Kirk Freiheit

This was yet another super awesome presentation.  This presentation was about how to format your objects using PowerShell formatting files.  Kirk went into a lot of examples and offered great detail and explanation into how the formatting process works.  Really, really good session.

Notes:

  • If you have 4 or fewer properties, you are going to get a table
  • If you have 5 or more properties, you are going to get a list
  • Kirk was using a Start-Demo PS Module to give his presentation.  It was really slick and I will definitely be using that.  You can find it here.
  • He also mentioned the EZ Out Module to help you write your own PS Format Files.  This is something I will also be trying out.  You can find it here.
  • There is a hidden property (this came from an audience member) that will allow you to get all the PSTypeNames at once for an object.  You use it by doing something like (Get-Computer | Select Blah, Blah2, Blah3).pstypenames

Session #5
Scripting Best Practices
Ed Wilson

Another excellent presentation full of laughs.  Light on the PowerPoint slides, he talked about what he considers to be Best Practices for working with PowerShell Scripts.

Notes:

  • Best practices are situational
  • Ask yourself, what is the point of the script?  Will I ever use this again?  If the answer is no, consider whether a true script is really needed.
  • Key to writing scripts is readability.  Saves everyone time.
  • Tip:  Add Start-Transcript to your profile
  • Avoid using backticks ( ` ) whenever possible
  • Ed uses lots of variables in his scripts to keep his lines short
  • Learn how to use the Debugger!  Scripting Guy blog post on this can be found here.
  • Write Functions.  They are the fundamental building blocks for re-using code
  • Don’t write Templates, create Snippets instead
  • Advanced Functions act like cmdlets
  • Regular functions should return objects and be simple (perform one task)
  • Dude, modules are key!  Modules can be shared and re-used.
  • Should use versioning and source control with Modules.

Session #6
Monad Manifesto Revisted
Jeffrey Snover

It’s amazing how much of what Jeffrey originally envisioned all those years ago came to be.  I had never heard any of the stories about how it got started, so it was really cool to hear him talk about the challenges they had getting PowerShell off the ground and how long it took.  Amazing to think how far it has come in such a short time.

Notes:

  • Originally founded as a way to kick start development in India.  This didn’t end up working well.
  • Manifesto was specific enough to give direction, vague enough to empower innovation
  • Focus is on “glue language” and empowering people
  • The number of “click next” admins is still very large; DSC is their last hope
  • Pay attention to OneGet at TechEd.  Something is coming.
  • How do we get developers to start using PowerShell?
  • Microsoft’s overall GUI strategy is an exercise in incoherence

Future of PowerShell:

  • Faster Cycles -> Cloud Cycles
  • More Community Engagement
  • Developers and DevOps
  • Business value increases with an increase in the consumption of computing
  • Minimize effort and risk to consume tons of computing
  • Prediction:  Open Source PowerShell (some day in the future)

 

 

PowerShell Summit 2014 Recap – Day 2

If you missed the Day 1 recap you can find that here.  Onto Day 2!

Session #1
On the Job:  Putting PowerShell Jobs to Work
Jeff Hicks

Really great session here.  I learned a lot about scheduled jobs, why you would want to use them, and how they are different from using Task Scheduler to run a scheduled job.

Notes:

  • Scheduled jobs are separate from the Windows Task Scheduler
  • Found in Microsoft > Windows > PowerShell > ScheduledJobs folder in Task Scheduler
  • Jeff mentioned that there is an open issue on Connect about using alternate credentials in Scheduled Jobs.  I am pretty sure that issue is this one here.
  • Jeff also mentioned that he has a Scheduled Jobs module.  I tried to locate this but apparently my GoogleFu is not strong.  If anyone has a link to it let me know and I will update this post.
  • It is easier to unregister an existing job and re-register it than it is to modify the existing job
  • Recommended to use a specific service account for every job

Session #2
7 Secrets of CIM
Brian Wilhite

Not a lot to say about this session.  It was definitely interesting, but I didn’t take a lot of notes on it.  I also think the title is misleading, because I wouldn’t say any of the stuff shown was a “secret”.  We were also only able to get through 4 of the 7 “secrets” before we ran out of time.

Notes:

  • Mentioned Richard Siddaway’s book on PowerShell and WMI as a great learning resource.  That can be found here.
  • PowerShell help topic on About_Splatting which can be found here.
  • Use as many filters and parameters as you can when querying WMI/CIM to help reduce the number of results

Session #3
Hyper-V and WMI (When the Hyper-V Module is not enough)
Aleksandar Nikolic

This was a super interesting session that I also didn’t take a lot of notes on.  It was really cool to see the Virtualization WMI namespaces and how they could be used with Hyper-V.  It is certainly not anything I would have ever thought of on my own.

Notes:

  • In Server 2012 (and higher) the namespace is now root/virtualization/v2
  • Anything less than Server 2012 the namespace is root/virtualization/v1

Session #4
Everything you Always Wanted to Know About Implicit Remoting (But Were Afraid to Ask)
Aleksandar Nikolic

This seriously blew my mind.  I had no idea how implicit remoting worked or what was even possible with it.  Quick and dirty rundown of how it works (see the sketch after the list).

  • Create a new-pssession to a server with a module you need (think Active Directory)
  • Load the module into the session
  • Import-PsSession into the local session
  • This allows commands to be run locally that are executed on the remote session
  • This importing of the session is stored in a temporary module in PowerShell
  • This session can then be exported to your PowerShell profile and used later
  • Formatting files are not transferred locally as part of the import process
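
A minimal sketch of those steps, assuming a server named DC01 that has the ActiveDirectory module installed:

# 1. Open a session to the server that has the module you need
$session = New-PSSession -ComputerName DC01

# 2. Load the module inside that remote session
Invoke-Command -Session $session -ScriptBlock { Import-Module ActiveDirectory }

# 3. Import the remote commands into the local session (a temporary proxy module is created)
Import-PSSession -Session $session -Module ActiveDirectory

# The AD cmdlets now run locally but execute on DC01, e.g. Get-ADUser -Filter *

# 4. Optionally export the session to a module on disk so it can be reused later
Export-PSSession -Session $session -Module ActiveDirectory -OutputModule RemoteAD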

Session #5
Working with System Center 2012 Orchestrator and Windows PowerShell
Sean Kearney

First, I want to give major props to Sean.  He had quite the experience making it out to the PowerShell summit and did a great job with this presentation.  This presentation was all about how to use PowerShell scripts inside Orchestrator runbooks.

Notes:

  • Using PowerShell inside Orchestrator can augment what Integration Packs are missing
  • Basic process is to build a Runbook, create a script, add it to a Run .NET Script activity in the Runbook (PowerShell type), integrate Orchestrator variables as needed, and publish or subscribe to the Orchestrator data bus as needed
  • No arrays allowed in PowerShell scripts in Orchestrator
  • There is no Undo in Orchestrator!  Make sure to check out your Runbooks, or better yet copy and paste them into a different editor to edit them in
  • Actions -> Export Runbook is your friend!
  • Running a PS script in the Runbook Tester will only tell you that there was an error, with no real information
  • Look on the right side of the Runbook Tester under Variables for issues with what Orchestrator thinks are “phantom variables”
  • Sean has created a page with links to System Center 2012 Orchestrator Resources here.
  • Also make sure to check out the Community SCORCH project on CodePlex here.

Session #6
Empower Your Help Desk w/ PowerShell
Jason Yoder

This was another great presentation, and if you haven’t met him or seen him in person, he is very high energy. The session centered on how to talk to your help desk about what they need, and how to create a GUI that gives the Help Desk what they need without overcomplicating things. Jason touched a lot on his experiences doing this and even demo’d a GUI he has been building that looked really nice.

Notes:

  • When talking to your Help Desk, remember that you have 2 ears and 1 mouth for a reason.  Listen to what they are saying!
  • Ask them what they need
  • What information do they need?

At 4 PM members of the PowerShell DSC team came by and did a bunch of lightning demos about things they are working on.  We saw a lot of cool stuff, but far and away the highlight was using PowerShell Desired State Configuration to configure a Linux machine.  Yes, this actually happened.  I saw it with my own eyes and could barely believe it.  And if they can configure Linux using Desired State Configuration, I have some guesses about what is coming in the future and am super excited about the possibilities.

Session #7

 

 

PowerShell Summit 2014 Recap – Day 1

Well PowerShell Summit 2014 is in the books, and wow, what an event. I feel like I could write an entire blog post about how great the food alone was. If you didn’t make it to the event, I cannot express strongly enough that you need to do everything in your power to get to PowerShell Summit 2015 in Charlotte, NC. Start planning now. Start talking to your boss(es) now. Halfway through the very first day I felt like I had already gotten my money’s worth. Not only was the content of every session fantastic (and in most cases mind blowingly awesome), but the connections you make with people will pay for the trip itself. It was awesome to be able to meet and speak with people I had only previously known from reading their blogs, interacting with them on Twitter, or watching videos of presentations they had done on YouTube.

On to the recap!

Session #1
PowerShell Just in Time / Just Enough Admin – Security in a Post-Snowden world
Jeffrey Snover

This session was a tremendous way to get the week started. Just Enough Admin (JEA) is a PowerShell toolkit that focuses on securing your environment by reducing exposure to admins and administrative accounts. One of the things Jeffrey talked about was an NSA document exposed by Snowden that showed the NSA was actively targeting systems administrators. You can find the document itself here and a breakdown of that document here.  The JEA Toolkit allows you to reduce the number of admin privileges and the scope of those privileges by being able to perform admin tasks without being an admin.

Briefly this is how it works:

  • JEA Toolkit allows you to create remote endpoints on servers with a specified set of abilities (restart services, run certain commands, etc.)
  • These endpoints create a local admin account that anyone who connects to that endpoint runs as when performing tasks
  • This local admin account has a 127-character password that is reset nightly (or more often if you like; however, each reset currently requires a WinRM restart.  This will be fixed in a future release).

Next Steps:

  • Work Hours – Who can access which endpoints, and when
  • Work Tasks – One shot work hours
  • 2 Factor Authentication (I am pretty sure this is right, my notes and handwriting on this are a little sloppy)
  • DSC Driven Safe Harbors and Jump Boxes (more on this in a different session recap below, but HOLY CRAP!)
  • GUI Tools/Toolkits over JEA Endpoints
  • Approve users for a specific endpoint for a specific time frame (ie 5 minutes to restart a service)
  • Collect logs of a JEA session for an audit trail

Session # 2,3,4
The Life and Times of a DSC Resource
Building Scalable Configurations
Patterns for Implementing Configuration with a DSC Pull Server
Steven Murawski

Since these are all about DSC I am just going to lump them all in together. Having all three of these sessions back-to-back-to-back was really beneficial because they all built on each other. It was awesome to see the real world Configurations Steve is using at Stack Exchange and just as importantly, the process he went through to get those Configurations to where they are today.

Some notes from the session(s):

  • DSC Resources should be Discrete, Flexible, Resilient, Idempotent, Chatty (in the logging sense)
  • Use and love Test-DSCResource when building your own Resources
  • Friendly name of a Resource can (and probably should) be different than the Resource name
  • Writing your own Resources requires debugging and error-handling.  DSC Resources are not interactive.  Write-Verbose is your friend.
  • DSC Resources run in the System context
  • Every Configuration he uses has a ConfigData hashtable in its own .ps1 file
  • You can filter your AllNodes data like Node $AllNodes.Where{ $_.Role -like 'WebServer' }.NodeName
  • Composite Resources are key!  Helps to streamline the creation of the .MOF document
  • Considerations for Implementing a Pull Server Environment:  Build Script(s), Source Control, Build Servers, Operations, Logging
  • There are modules on GitHub he has created to speed up and streamline the process of creating, building and deploying Configurations.  You can find those here.

Session #5
Using PowerShell Workflows
Trevor Sullivan

I personally didn’t see anything here that I hadn’t seen or heard before, so I don’t have much to say about it. Some notes that I did write down were:

  • Only supported in Version 3.0 or Later
  • Remoting Enabled requires ports TCP 5985 and TCP 5986 (For SSL)
  • Can be setup to use SSL (which from the way he talked about it, sounds painful)

Session #6
SCOM – PowerShell Goodness
Jeff Truman

If you use SCOM on a regular basis there wasn’t anything new here.  However!  It was worth going because one of the attendees (I don’t know who it was) said that using Active Directory integration you can install the SCOM agent on a template, and when the machine comes online the agent will show as Managed, and not as a manual agent install you need to approve.  I need to figure out how to do this!  If anyone has information on this I would like to speak to you about it :).

Session #7
Proper Tooling through PowerShell
Jim Christopher

This session was great.  Jim is a great speaker who presented with a lot of humor and a real-world example that was easy to follow, and he explained how it applied to everyone in the room.

Some takeaways from his presentation:

  • Tools should assume batch operations, not single ones
  • Do not assume a human presence.  That is, don’t assume someone is sitting there waiting to put in input or respond to the tool
  • His Entity Shell Module that he created and used for this presentation can be found here.

He also had a hilarious exchange with Steve Murawski.  Jim kept using ForEach in his example and Steve commented that he “died a little every time you use ForEach”.  Jim responded by typing out a bunch of ForEach code blocks on the screen which got a laugh from everyone.

That’s it for the Day 1 recap.  Day 2 coming later!