Finding All Client Mailboxes in the Office 365 Partner Portal

Someone at work asked me, “How many client mailboxes are we supporting in Office 365?” It’s easy to tell how many clients you are supporting; as soon as you log in to the partner portal it tells you that number. I had never tried to get the number of client mailboxes before, but I strongly suspected that I could do it using PowerShell. Turns out I was right!

The first thing you will need to do is connect to Office 365 using PowerShell. If you haven’t done this before, go to this page, then download and install the Office 365 PowerShell module.

After that is done, execute the commands below to connect to Office 365. It’s important to note that the credential you use must have delegated admin access to all of the client accounts, or the steps in this article won’t work.
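A minimal sketch of those connection commands (assuming the MSOnline module from the download above is installed):

```powershell
# Load the MSOnline module and connect with the delegated admin credential
Import-Module MSOnline
$Cred = Get-Credential
Connect-MsolService -Credential $Cred
```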

Once that is done, running the commands below will list all the commands in the MSOnline module, and then just the ones that contain the word “partner”. I was hoping there was a cmdlet that would let me get all of my clients’ information, which I could then iterate through to get a list of all mailboxes.
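A sketch of that discovery step:

```powershell
# Everything the MSOnline module exposes
Get-Command -Module MSOnline

# Narrow it down to just the partner cmdlets
Get-Command -Module MSOnline -Name *Partner*
```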

[Screenshot: Get-Command output showing the MSOnline partner cmdlets]

Get-MSOLPartnerContract looks interesting; let’s try that.
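Something like:

```powershell
# Return the contracts for every client tenant we manage
Get-MsolPartnerContract -All
```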

[Screenshot: Get-MsolPartnerContract output]

Well, that gets me a lot of Tenant IDs, which doesn’t appear to be useful. Or does it? I know from using Office 365 with PowerShell in the past that there are commands that use the TenantId parameter to retrieve information.

Doing some more investigation, I run the series of commands below. First I store all the Tenant IDs in a $Clients variable, then use Get-Member to see what properties the objects contain. Seeing that one of them is DefaultDomainName, I run $Clients.DefaultDomainName, which sure enough enumerates all of my client domain names, so I can see that they are in fact my clients. I then store all the Tenant IDs in another variable, $Tenants, which I will be using shortly.
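A sketch of that series of commands:

```powershell
# Store the partner contracts and inspect the properties they carry
$Clients = Get-MsolPartnerContract -All
$Clients | Get-Member -MemberType Property

# DefaultDomainName confirms these really are my clients
$Clients.DefaultDomainName

# Keep just the Tenant IDs for later
$Tenants = $Clients.TenantId
```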

Earlier I mentioned that I knew from past experience that there were cmdlets in the MSOnline module that used TenantId as a parameter. How could I find out what those commands are? By running the command below. I reduced the output by using -Verb Get, since I am going to be getting the number of mailboxes.
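The command in question, roughly:

```powershell
# Which Get-* cmdlets in MSOnline accept a TenantId parameter?
Get-Command -Module MSOnline -Verb Get -ParameterName TenantId
```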

Scrolling through the list, I see that Get-MSOLUser is one of the cmdlets that takes a TenantId parameter, and I think that might be a good place to start. I take one of the Tenant IDs in my $Tenants variable and run the command below.
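For example, taking the first Tenant ID in the variable:

```powershell
# List the users for a single client tenant
Get-MsolUser -TenantId $Tenants[0]
```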

I was hoping to see a domain or client name property, but unfortunately there is not one. So in order to figure that out I ran the commands below. What I am doing here is taking the first user that comes up for the tenant and getting their User Principal Name (UPN). Since all of a client’s users share the same UPN suffix, this will be OK. After that I split the UPN at the ‘@’ symbol and use that information for the domain. This is definitely not the only way to do this (or probably the best), but it worked for me.
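A sketch of that approach:

```powershell
# Take the first user's UPN for this tenant and split it at the '@'
$UPN    = (Get-MsolUser -TenantId $Tenants[0] |
           Select-Object -First 1).UserPrincipalName
$Domain = $UPN -split '@'   # two pieces: before and after the '@'
```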

Next I create a hashtable with my properties, create a new PSObject from it, and save this information to a .CSV file. Notice in the properties section that for the domain I have to use $Domain[1], because splitting the UPN at the ‘@’ symbol produced two pieces: the section before the ‘@’ and the section after it.
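Sketched, with hypothetical property names (Domain and LicensedUsers) and CSV path:

```powershell
$Properties = @{
    Domain        = $Domain[1]   # the piece after the '@'
    LicensedUsers = (Get-MsolUser -TenantId $Tenants[0] -All |
                     Where-Object { $_.IsLicensed }).Count
}
New-Object -TypeName PSObject -Property $Properties |
    Export-Csv -Path .\PartnerMailboxes.csv -NoTypeInformation -Append
```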

Finally, I have tied all of this into a function I am calling Get-PartnerMailboxes, which iterates through all my Tenant IDs and creates a .CSV listing the domain and total number of licensed users for each tenant.
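Assembled, the function might look something like this (a sketch; the CSV path and property names are my choices, not necessarily the original’s):

```powershell
function Get-PartnerMailboxes {
    [CmdletBinding()]
    param (
        [string]$Path = '.\PartnerMailboxes.csv'
    )
    $Tenants = (Get-MsolPartnerContract -All).TenantId
    foreach ($Tenant in $Tenants) {
        $Users  = Get-MsolUser -TenantId $Tenant -All
        # All of a client's users share a UPN suffix, so one UPN is enough
        $Domain = ($Users | Select-Object -First 1).UserPrincipalName -split '@'
        $Properties = @{
            Domain        = $Domain[1]
            LicensedUsers = ($Users | Where-Object { $_.IsLicensed }).Count
        }
        New-Object -TypeName PSObject -Property $Properties |
            Export-Csv -Path $Path -NoTypeInformation -Append
    }
}
```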

Warning: Unable to find package provider ‘PSModule’ on Windows 10

Running Windows 10 Enterprise Preview Build 5.0.10158.0

When I run Find-Module, I receive the error message in the screenshot below.
[Screenshot: “Unable to find package provider ‘PSModule’” error from Find-Module]

After bashing my face against the problem for a while, I got some help from Ben Gelens, who provided the answer on how to fix the problem:

Once that is done, close and reopen the ISE (or the Console host) and you are good to go!

Why Should I Attend the PowerShell Summit?

Starting with last fall’s European PowerShell Summit, and continuing with this year’s North America PowerShell Summit, all the session recordings are available on YouTube. Because of that, you may be asking yourself, “Why should I go to the Summit when I can just watch everything online?” Here are ten reasons, in no particular order other than the order in which my brain dumped them out.

  1. You get direct interaction with the product team. And I am not talking about members of the PowerShell team sitting in a corner not interacting with anyone.  They are there to get as much feedback as possible, learn how people are using it, and to directly interact with members of the community.  This isn’t the Microsoft of 10 years ago or two years ago.  When they say they want your feedback (and not just the good stuff) they absolutely mean it.  Special thanks to Lee Holmes, Michael Greene, Joey Aiello, Angel Calvo, Hemant Mahawar, and Kenneth Hansen for making the trip out to Charlotte.
  2. I don’t care how much you interact with other members of the community over Twitter, Email, Google Hangouts, whatever.  There is no substitute for meeting people face to face, shaking their hand and getting to know something about them besides how they use PowerShell.
  3. And combining #1 and #2, you also get to talk to (and listen to) people talk about how they have solved problems using PowerShell, and what their thought process was around creating that solution. You can then ask people, “I have this problem, what would you do to solve it?”  One of those conversations alone is worth the price of admission.
  4. You get to watch Mike Robbins “harass” Rohn Edwards all week by telling everyone how great his sessions are going to be and how everyone needs to go to them.  By all accounts they were awesome.
  5. You get to see the look on Dave Wyatt’s face when Microsoft announces that “his code” (Pester) is shipping with the next version of Windows Server.
  6. You get to have Steven Murawski answer your questions about creating Custom DSC Resources while you are creating them.
  7. You get a free Chef T-Shirt, courtesy of Steve.
  8. You get awesome Nano Server and PowerShell stickers courtesy of the one and only Jeffrey Snover.
  9. You get to watch Jason Helmick, live and in person, talk about how he has his Depends on.
  10. You end up finding a bug in Class based DSC Resources that you only found because you participated in the DSC Resource hack-a-thon at the PowerShell Summit.  So make sure you vote on that!
  11. Bonus!  Jeff Hicks gives you a signed PowerShell Deep Dives book and a 30 day Pluralsight subscription to give away at your next user group meeting.
  12. Bonus!  You get to watch Jeffrey Snover demonstrate and talk about a bunch of stuff I can’t repeat or talk about upon fear of death :).
  13. Bonus!  You get to talk to (and listen to) June Blender talk about PowerShell, PowerShell Help, and writing.  Her passion and knowledge around those topics is unbelievable.
  14. Bonus!  You learn how little you really know about PowerShell.  This is a good thing!  This is also something I relearn on a nearly daily basis.

I was also asked by Josh Duffney on Twitter what I thought were some of the “must watch” videos from the Summit.  The lame answer is “everything”, but that’s also not realistic.  If you put a gun to my head and said “you have to pick 7 sessions” here are the 7 I would pick (no particular order).  All the Summit videos can be found in this playlist on YouTube.

  1. Kenneth Hansen & Angel Calvo PowerShell Team Engagement
  2. Don Jones DSC Resource Design
  3. Dave Wyatt on Automated Testing using Pester
  4. Defending the Defenders Part 1 & 2
  5. Debugging
  6. PowerShell Get
  7. Ashley McGlone on DSC and AD
  8. PowerShell v5 Debugging (There is also a session on Debugging PowerShell by Kirk Munro.  These are different)

ValidateSet for a Parameter in a DSC Class Based Resource Fails to Throw Error

While working on a Custom DSC Resource that I started Monday night at the PowerShell Summit, I came across some interesting behavior that turned out to be a bug in the WMF 5.0 February Preview. I have logged the issue on Connect, but I wanted to write a blog post demonstrating exactly what is going on for when someone else runs into it. I am just going to use the Custom DSC Resource for creating a Primary DNS Zone that I was working on as the example to demonstrate the behavior.

Here are the properties for the resource. I figured I could just do ValidateSet like I always had done for an advanced function or non-class based DSC Resource.
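The properties looked something like this (a sketch; the class and property names here are illustrative, not the exact ones from the repo):

```powershell
[DscResource()]
class cDnsPrimaryZone {
    [DscProperty(Key)]
    [string]$Name

    # ValidateSet, applied the same way as in an advanced function
    [DscProperty()]
    [ValidateSet('Domain','Forest','Current','Legacy')]
    [string]$ReplicationScope

    [DscProperty(Mandatory)]
    [string]$Ensure

    # Stubbed methods so the class compiles as a DSC resource
    [cDnsPrimaryZone] Get() { return $this }
    [void] Set() { }
    [bool] Test() { return $true }
}
```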

My Configuration for testing the Resource looks like this:
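A test Configuration along these lines (node, zone name, and output path are illustrative):

```powershell
Configuration TestDnsZone {
    Import-DscResource -ModuleName cDnsPrimaryZone
    Node 'localhost' {
        cDnsPrimaryZone ExampleZone {
            Name             = 'test.local'
            ReplicationScope = 'Domain'   # a valid member of the set
            Ensure           = 'Present'
        }
    }
}
TestDnsZone -OutputPath C:\DSC\TestDnsZone
```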

If I run this Configuration, with one of the appropriate values for ReplicationScope, it works exactly like you would expect it to.

That’s great. But what happens if I put in a value that isn’t part of the ValidateSet?

That is clearly not what should happen. You would expect to see an error saying something to the effect of “HokeyPokey does not belong to the set ‘Domain’, ‘Forest’, ‘Current’, ‘Legacy’; it needs to be one of those values.”

If we look at the .MOF file that gets created, this incorrect value also makes it into the .MOF file:

You can tell that it knows something is wrong, because when it runs through Test-TargetResource and Set-TargetResource it doesn’t actually do anything (notice all the Verbose messages that are missing from the previous example), but it also doesn’t error.

So how do we get around this? By using an Enum!
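Swapping the ValidateSet attribute for an enum-typed property, sketched with the same illustrative names as above:

```powershell
enum ReplicationScope {
    Domain
    Forest
    Current
    Legacy
}

[DscResource()]
class cDnsPrimaryZone {
    [DscProperty(Key)]
    [string]$Name

    # Typing the property as the enum means anything outside the four
    # members is rejected when the configuration's .MOF is compiled
    [DscProperty()]
    [ReplicationScope]$ReplicationScope

    [DscProperty(Mandatory)]
    [string]$Ensure

    [cDnsPrimaryZone] Get() { return $this }
    [void] Set() { }
    [bool] Test() { return $true }
}
```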

Now, if I try to set the ReplicationScope to HokeyPokey, we get the behavior we would expect.

Creating a Class based DSC Resource using PowerShell

Scenario: I would like to think I am fairly competent at creating DSC Resources using .MOF files. But with the upcoming release of Windows Management Framework 5.0 we can now write DSC Resources using Classes. I have never done anything with a Class. I am going to attempt to figure out how to write a simple Class based DSC Resource.

I am going to start with this TechNet Article, because it was the first thing that came up when I Googled “Create Class based DSC Resource”.

I am going to create a DSC Resource called MyTestClassResource. The first thing I need to do is create the appropriate folder structure, which I do by running the following two commands:

Next, I need to create the Module Manifest.

The .PSM1 file is where I define and create my Class based resource.

Now, let’s get to creating this Resource. I am going to go super simple here and create a Resource that will just ensure that a Folder exists. Yes, I know the File DSC Resource already does this, that’s not the point :). After fiddling around for a little bit here is what my Class DSC Resource “Skeleton” looks like.
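The skeleton came out looking roughly like this (Ensure as a plain string keeps the sketch simple):

```powershell
[DscResource()]
class MyFolderResource {
    # The folder to manage
    [DscProperty(Key)]
    [string]$Path

    # 'Present' or 'Absent'
    [DscProperty(Mandatory)]
    [string]$Ensure

    # The three required methods, stubbed out for now
    [MyFolderResource] Get() { return $this }
    [void] Set() { }
    [bool] Test() { return $true }
}
```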

The rest of this should be pretty straightforward. Right? The first thing I try to do in the Get section is to test and see if the path exists.

This doesn’t work, however, because “Variable is not assigned in the method”, whatever the hell that means. Looking at the article, it uses a $This object (I have no idea if that’s even the right word) with the variables to do things, so let’s try a different tactic.

And that works fine, so clearly $This is some special thing I need to be using going forward. This should be fun :). Now that I have that working, the rest of this Method was pretty straightforward.
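The completed Get Method, sketched with $This:

```powershell
[MyFolderResource] Get() {
    # $this refers to the current instance, so its properties count
    # as "assigned in the method"
    $Item = Get-Item -Path $this.Path -ErrorAction SilentlyContinue
    if ($Item) {
        $this.Ensure = 'Present'
    }
    else {
        $this.Ensure = 'Absent'
    }
    return $this
}
```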

After that, the Test Method is pretty easy as well. However, I have no idea what is going on with the whole Return -not $Item part, I am just following along from the example and hoping that it works.
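Sketched, with the -not spelled out:

```powershell
[bool] Test() {
    $Item = Get-Item -Path $this.Path -ErrorAction SilentlyContinue
    if ($this.Ensure -eq 'Present') {
        # Cast to [bool]: $null becomes $false, a folder object becomes $true
        return [bool]$Item
    }
    # Ensure = 'Absent': the test passes only when nothing was found,
    # which is exactly what Return -not $Item expresses
    return -not $Item
}
```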

Onto the Set Method. One thing I noticed when creating this is that I don’t need an Else block with an If statement, which is nice.
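The Set Method, as two independent If statements with no Else block:

```powershell
[void] Set() {
    if ($this.Ensure -eq 'Present') {
        New-Item -Path $this.Path -ItemType Directory -Force | Out-Null
    }
    if ($this.Ensure -eq 'Absent') {
        Remove-Item -Path $this.Path -Recurse -Force
    }
}
```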

With that done I need to re-create the Module Manifest that I made earlier with some important information.
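The important additions are RootModule and DscResourcesToExport; a sketch (the path and names are illustrative):

```powershell
# Point the manifest at the .psm1 and export the class-based resource
New-ModuleManifest -Path .\MyTestClassResources.psd1 `
    -RootModule 'MyTestClassResources.psm1' `
    -DscResourcesToExport 'MyFolderResource'
```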

At this point, while trying to run Get-DSCResource, I realized that the folder structure I created at the beginning was not correct, because I was used to the way I had been doing it when working with .MOF files. I actually need only this:

And then I moved the .psm1 and .psd1 files into that folder. No extra sub-folder required. This is a win in my book.

Now, when I run Get-Module -ListAvailable, the MyTestClassResource Module is listed. However, when I run Get-DSCResource -Module MyTestClassResource I get a lot of nothing. Weird. Next I try to Import-Module -Name MyTestClassResource and I get this giant bundle of joy.

Uhhh…what? Thinking quickly, I decide that it’s probably not going to work to name everything exactly the same. I rename the MyTestClassResource folder to MyTestClassResources and leave everything else the same. And, well, I will spare you all the error text, but that didn’t work at all either. I am not sure how much time I spent trying to figure out what the hell I was doing wrong, but to say it was an exercise in frustration is a massive understatement. No matter what I did, I couldn’t get it to recognize a valid Module, let alone a DSC Resource. I lost track of all the troubleshooting that I did, but here is what my folder structure looks like now that it is working. Note that I renamed my Class based Resource from MyTestClassResource to MyFolderResource.
[Screenshot: the working folder structure]

Now, let’s test a Configuration!
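A test along these lines (the folder and output paths are arbitrary):

```powershell
Configuration TestMyFolder {
    Import-DscResource -ModuleName MyTestClassResources
    Node 'localhost' {
        MyFolderResource DemoFolder {
            Path   = 'C:\TestFolder'
            Ensure = 'Present'
        }
    }
}
TestMyFolder -OutputPath C:\DSC\TestMyFolder
Start-DscConfiguration -Path C:\DSC\TestMyFolder -Wait -Verbose
```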

And just to make sure it’s working, let’s set it to Absent

cVSS Custom Desired State Configuration (DSC) Resource

I will keep this pretty short and sweet. I was asked to create a Custom DSC Resource that could ensure that Volume Shadow Copies is always enabled on a drive. I thought it would be pretty easy to do, but it turns out it wasn’t nearly as easy as I thought. When you enable VSS through the GUI, two things happen. First, VSS gets enabled on the drive with some default settings. Second, two scheduled tasks are created that match those settings to actually create the shadow copies. Because of that, I broke this custom resource into two parts. The cVSS Resource just specifies the drive you want to enable, whether or not you want to enable it, what drive you want to store the shadow copies on, and how much space you want the shadow copies to take up. A Configuration using this Resource looks like this:
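A hedged sketch of such a Configuration; the property names here (Drive, Enabled, StorageDrive, MaxSizeGB) are placeholders matching the description above, and the real ones are in the GitHub repo:

```powershell
Configuration EnableShadowCopies {
    Import-DscResource -ModuleName cVSS
    Node 'localhost' {
        cVSS DataDrive {
            Drive        = 'D:'     # drive to enable VSS on
            Enabled      = $true    # whether VSS should be on
            StorageDrive = 'D:'     # where the shadow copies live
            MaxSizeGB    = 10       # space the copies may consume
        }
    }
}
```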

The cVSSTaskScheduler Resource creates the Scheduled Tasks that create the actual shadow copies on the drive. A Configuration using this Resource looks like this:

All the files you need to use this Resource can be found on my GitHub site. There is an Examples folder that has the two examples above plus a Configuration using both. The Files folder contains two files that just show how I created (and Updated) the Resource as I went through the process of actually getting it to work. I am very new to the whole GitHub thing so if there are bugs, things that I missed, or a better way of doing something and you want to update or change the files let me know and we can certainly do that.

“Undefined Property ConfigurationName” When Starting a DSC Configuration

Update: This issue is resolved in the November 2014 Update Rollup for Windows RT 8.1, Windows 8.1 and Windows Server 2012 R2. Thanks to Dave Wyatt for the heads up!

So, there I was today, working on a Custom DSC Resource and when I ran Start-DSCConfiguration I got this error:

Well, what the hell does that mean? I had just done basically the same thing with another Custom Resource the other day, to the exact same machine, and didn’t have any problems. After doing a bunch of different things that didn’t work, I googled the error and came across this excellent article by Mike Robbins.

I had updated a bunch of Modules and the WMF on my Windows 8.1 desktop to prepare for the Advanced DSC MVA series with Jason Helmick and Jeffrey Snover. This is the version of PowerShell on my Windows 8.1 machine:

And here is the version on my Windows 2012 R2 Server:

If I look at my .MOF file, I have both of the sections Mike talks about in his article. I deleted the lines in the .MOF that said ConfigurationName = “TestVSS” and Name = “TestVSS” and tried to send the Configuration again.

Now I got a different error, because there are a few other things in the .MOF file that my destination server doesn’t know how to handle. Here is the error:

Looking at my .MOF file, sure enough I have a couple of extra lines in addition to the ones described in Mike’s article:

If I delete those two lines from my .MOF file, I get what I expected to get all along:

Now, it’s great that it’s “working”, but I am wondering: if I update the WMF on the target server to the November WMF 5.0 install, will my Configuration work without any issues? The answer is yes, it does.

Demo DSC – Part 3

In Part 1 of this series I talked about how I demo’d the building of a Domain Controller. In Part 2 I talked about demoing the building of a Pull Server, an App Server, and then using the two servers to show how a Pull Server works and what needs to be done to make the magic happen. If you didn’t read Part 1, here is the disclaimer:

This was never intended to demonstrate all the features and capabilities of DSC (there’s a lot!), but instead was done to show at a high level the kinds of things that are possible and to start a discussion about where it fits into our organization immediately and going forward.

My outline for this part of the demo looked like this:

  1. Build Web Server
    1. Run BuildWebServer Script on the Web Server
    2. Talk about what’s going on while the server reboots
      1. File copy after domain join
      2. Install of Roles and Features, IIS Components
  2. Post Reboot
    1. Show IIS Site(s)
      1. Show default as stopped
      2. Show DSCTest Website
    2. Browse to site from App Server – http://<WebServerName>:8080
    3. Break Web Server
      1. Change IIS Binding
      2. Delete WebSiteFiles Folder
    4. Show broken site from App Server
    5. Talk about various ways this could be fixed (Push/Pull)
    6. Run the BuildLabWebServer script on the Web Server
    7. Show working site from APP Server

For comedic purposes, here is what my awesome Microsoft Word Website looked like that I break in this demo:

[Screenshot: the demo “website” built in Microsoft Word]

Here is the Configuration script in its entirety. It’s also available on GitHub.

Demo DSC – Part 2

In Part 1 of this series I talked about how I demo’d the building of a Domain Controller. In Part 2 I am going to talk about how I demo’d building a Pull Server, an App Server, and used the two servers to show how a Pull Server works and what needs to be done to make the magic happen. If you didn’t read Part 1, here is the disclaimer:

This was never intended to demonstrate all the features and capabilities of DSC (there’s a lot!), but instead was done to show at a high level the kinds of things that are possible and to start a discussion about where it fits into our organization immediately and going forward.

The outline for this part of my Demo looked like this:

  1. Talk about the purpose of a Pull server (Can also be used to push and write Configurations)
    1. Show how nothing is configured (name, domain, roles/features, etc.)
    2. Open ISE, Run BuildPullServer Script
    3. Will reboot. While rebooting show the computer account on the DC
    4. Login as domain account
      1. Create share C:\WebServerFiles, shared with Everyone (explain why we need it later). Explain that this could have been done with DSC; I just chose not to.  This share will come into play later.
      2. Copy website files to this share (I created a “website” in Word to use with a Web Server, that will come in the last part of this series)

Here is the Configuration script in its entirety. It’s also available on GitHub.

With that done, I then built what I called an App Server. Don’t think that I somehow deployed an Application using DSC (I didn’t), but with a Web Server being the last part of my demo, I needed to call it something that sort of made sense, so I called it an App Server. The build script for the App Server is below, and you can see that it’s much smaller than the previous two build scripts. In this case I wanted to show a minimal configuration for a build script and then demonstrate the process of configuring the App Server to pull a new Configuration.

Here is the Configuration script in its entirety. It’s also available on GitHub.

With that done, the next step is to create a Configuration on the Pull Server for the App Server to Pull. All this Configuration does is change the TimeZone on the App Server. Nothing fancy here. There are also some other pieces at the bottom of the Configuration script I should talk about. I have hardcoded a GUID for the server in the Configuration. You can either use this one or change it to your own. I am setting the source and destination paths and sticking the GUID onto the end of the .MOF file, which is required when you are pulling a Configuration. This GUID is how the server knows which Configuration belongs to it (as we will see here shortly). I am then copying the file from the source path to the destination path, and then creating a Checksum file for the .MOF (which is also required).

Here is the Configuration script in its entirety. It’s also available on Github.

With that done, we need to do one other thing before this is going to work. I pre-copied various DSC Resources to the Pull Server, so now we need to zip up the xTimeZone resource so that it can be copied to the App Server when it pulls its Configuration. You do this by creating a .ZIP file of the xTimeZone Module and appending the version number to the end of the name. In this case, my file name after creating the .ZIP archive is xTimeZone_1.0.0.zip. This file then needs to be placed in the “$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules” folder, which is the ModulePath we specified in our Pull Server Configuration. Once that is done, we need to run the command below to also create a Checksum file for this archive.
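On WMF 5.0, the zip-and-checksum step can be sketched like this (the source path assumes the module sits in the default Modules folder):

```powershell
$Dest = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules\xTimeZone_1.0.0.zip"

# Archive the module; the file name must be <ModuleName>_<Version>.zip
Compress-Archive -Path "$env:PROGRAMFILES\WindowsPowerShell\Modules\xTimeZone" `
    -DestinationPath $Dest

# The pull server also requires a matching checksum file for the archive
New-DscChecksum -Path $Dest
```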

Next we need to create a Configuration to tell the App Server to pull its Configuration. This is done by changing the Local Configuration Manager (LCM) settings on the App Server. In my demo I built this Configuration on the Pull Server and then pushed it to the App Server. The outline for this part of the demo looked like this:

  1. Create LCM Configuration. Comment out the Set line in the script. Explain the meta.mof file.
  2. Show LCM Configuration on App Server
  3. Show Consistency Task settings (there should be none)
  4. Push LCM Configuration from Pull Server to App Server
  5. Show LCM Configuration on App Server compared to previous
  6. Show Time Zone. Run Scheduled Task.
  7. Watch App Server for Time Change
    a. Change Time Zone again to something totally random
    b. Run Consistency Task again, watch Time Zone change again

Here is the Configuration script in its entirety. It’s also available on GitHub. You should also note in this script that the Configuration GUID from before makes an appearance here as well. This GUID is what tells the App Server which Configuration to look for on the Pull Server.
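For reference, a WMF 4.0-style LCM Configuration for pull mode might be sketched like this; the server URL, GUID placeholder, and computer name are my assumptions, not values from the original script:

```powershell
Configuration SetPullMode {
    param (
        [string]$ComputerName,
        [string]$Guid
    )
    Node $ComputerName {
        LocalConfigurationManager {
            ConfigurationID           = $Guid   # same GUID used in the .MOF file name
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl               = 'http://PullServer:8080/PSDSCPullServer.svc'
                AllowUnsecureConnection = 'True'
            }
        }
    }
}
SetPullMode -ComputerName 'AppServer' -Guid '<your GUID here>'

# The "Set line" referred to in the outline above:
Set-DscLocalConfigurationManager -Path .\SetPullMode -Verbose
```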

Demo DSC – Part 1

This is the first in a series of posts outlining how I presented a demo of Desired State Configuration (DSC) for the organization I work for. This was never intended to demonstrate all the features and capabilities of DSC (there’s a lot!), but instead was done to show at a high level the kinds of things that are possible and to start a discussion about where it fits into our organization immediately and going forward.

My demo was done using four Server 2012 R2 virtual machines on a single VMware ESXi host. Because this environment was in a lab (with some unique networking challenges), and to make things easier for me during the demo, I just copied the set of files from a Windows 8.1 machine on the same network as the host onto each VM individually.  I built and ran this demo using Wave 9 DSC Resources.  I switched to Wave 10 halfway through and had a problem with the xComputerManagement Resource (in Wave 10 it doesn’t properly evaluate whether or not the Computer Names match), so I switched back to Wave 9 to avoid any further problems.  You will also notice in the script that I hardcoded credentials, which is definitely not the recommended way to do it in a production environment.

The first thing I wanted to do was to build a Domain Controller on a brand new domain, that would be the foundation for showcasing other features of DSC in the rest of the demo. My outline for this part of the demo looked like this:

  1. Show New Server Build
    1. Show how nothing is configured (name, domain, time zone, IEESC, IP address, etc.)
    2. Open ISE, Run BuildDC Script. Show computer rename and restart section.
    3. Will restart – Talk about what just happened.
  2. Continue Server Build Post Reboot
    1. Login after reboot, show post Reboot scheduled task kicking off
      1. Show IP address change
      2. Wait for restart again (Approx 3:15 total at this point)
    2. Login after restart with Domain credentials
      1. Show Firewall Status
      2. Event Log Configuration
      3. Time Zone Configuration
  3. Run entire Configuration again to show nothing happens.

Here is the BuildDC Configuration script in its entirety.  It’s also available on GitHub.