Nano Server Management

Where has the time gone? I looked up from my computer and the summer is nearly over! One of the things I’ve been tinkering with lately in some of my “infrastructure as code” projects is Nano Server. Not only is Nano Server gearing up to be a great Hyper-V host and a cool place to start dabbling in containers, it’s also a great server to use when testing deployment scripts because it’s small and deploys quickly. When all I want to do is spin up and tear down to test my templates, I love being able to use a Windows server with a smaller footprint.

With Nano Server being “headless”, it only supports remote administration, and this has led me to check out all the newish ways we can manage servers remotely. You’ll need to take a few steps before you can remotely manage a Nano Server deployed in Azure.

  1. Open the NSG on Azure for the Nano Server – If you created a VM from the Azure Portal and accepted all the defaults (which include an NSG), that NSG doesn’t open the ports for WinRM by default; it only opens RDP. The OS firewall is open to accept WinRM and PowerShell, but the NSG blocks it. You need to edit the NSG to include TCP ports 5985 (http) and/or 5986 (https) for remote management (a scripted version is sketched after this list).
  2.  Add Nano External IP Address as a Trusted Host – Since you’ll be connecting to your VM remotely over the public internet, you’ll need to add that IP address to your trusted host list on your workstation. You can do that via PowerShell or via CMD (just pick one).
    1. winrm set winrm/config/client @{ TrustedHosts="13.88.11.166" }
    2. Set-Item WSMan:\localhost\Client\TrustedHosts "13.88.11.166"
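
If you would rather script the NSG change from step 1 than click through the portal, something along these lines should work with the AzureRM PowerShell module. This is only a sketch – the resource group name, NSG name, and rule priority below are placeholders, not values from my lab:

# Placeholder names – substitute your own resource group and NSG.
$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "nano-rg" -Name "nano-nsg"
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-WinRM" -Access Allow -Protocol Tcp `
    -Direction Inbound -Priority 1010 -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "5985-5986" |
    Set-AzureRmNetworkSecurityGroup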

At this point you should be able to remotely connect to your Nano Server using PowerShell. On your workstation, run (replacing the IP address and username as appropriate):

$ip = "13.88.11.166"
 $user = "$ip\sysadmin"
 Enter-PSSession -ComputerName $ip -Credential $user

You’ll be prompted for your password and then you’ll be given a remote PowerShell prompt to your Nano VM. But what if you want MORE than just a PowerShell prompt? What if you want access to event logs? Or some basic performance information? Or dare I say, use “Computer Management”?

You can use Server Manager tools from your workstation, or you can use the Azure Server Management Tools (and Gateway).

While you’re remotely connected to the server you want to manage, you may need to make a few other small changes, particularly if your servers aren’t domain joined or are on a different subnet than the machine you are connecting from. I recommend checking out this troubleshooting guide – https://blogs.technet.microsoft.com/servermanagement/2016/07/20/troubleshooting-problems-with-server-management-tools/

If you specify the local administrator account in Microsoft Azure to connect to the managed server, you have to configure this registry key on the managed server:
REG ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1

If you are connecting from a different subnet:
NETSH advfirewall firewall add rule name="WinRM5985" protocol=TCP dir=in localport=5985 action=allow

If you want to use Computer Management and other common Server Manager tools:
Set-NetFirewallRule -DisplayGroup 'Remote Event Log Management' -Enabled True -PassThru |
select DisplayName, Enabled

Happy Remoting!

The Imperfect Lab: Letting Additional Administrators Remotely Connect to Servers

An age-old server administration best practice is to make sure that everyone who is administering servers on your network is doing it with their own “admin” credentials.

Up until this point, I’ve done all my remote Azure sessions (PS-Session) with the built-in administrator account. This works fine if you are the only person connecting remotely to a server. But what if you want to grant others administrative rights to your machine and they would also like to connect remotely?

Your first step would likely be to add them to the local administrators group. Since you’ve already turned on the “remote management” feature for yourself, you might expect this to work out of the box.

But you probably overlooked this little note in the “Configure Remote Management” box when you enabled remote management – “Local Administrator accounts other than the built-in admin may not have rights to manage this computer remotely, even if remote management is enabled.”

That would be your hint that some other force might be at work here.  Turns out that UAC is configured to filter out everyone except the built-in administrator for remote tasks.

A review of this TechNet information gives a little more detail:

“Local administrator accounts other than the built-in Administrator account may not have rights to manage a server remotely, even if remote management is enabled. The Remote User Account Control (UAC) LocalAccountTokenFilterPolicy registry setting must be configured to allow local accounts of the Administrators group other than the built-in administrator account to remotely manage the server.”

To open up UAC to include everyone in your local Admins group for remote access, you’ll need to make some registry changes.

Follow these steps to manually edit the registry (or use the PowerShell sketch after this list):

  1. Click Start, type regedit in the Start Search box, and then click regedit.exe in the Programs list.
  2. Locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  3. On the Edit menu, point to New, and then click DWORD Value.
  4. Type LocalAccountTokenFilterPolicy for the name of the DWORD, and then press ENTER.
  5. Right-click LocalAccountTokenFilterPolicy, and then click Modify.
  6. In the Value data box, type 1, and then click OK.
  7. Exit Registry Editor.
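
If you would rather not click through regedit on a server you are already managing remotely, the same change can be made in one remote command. This is just a sketch run from a session using the built-in administrator account; the server name below is a placeholder:

# "nano-srv1" is a placeholder – use the name or IP you already connect with.
Invoke-Command -ComputerName "nano-srv1" -Credential "nano-srv1\sysadmin" -ScriptBlock {
    # 1 = stop UAC from filtering remote tokens for other local administrators
    New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" `
        -Name "LocalAccountTokenFilterPolicy" -Value 1 -PropertyType DWORD -Force
}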

Now you will be able to remotely connect and administer your server using PowerShell with any account you’ve given Admin rights to for that particular server. This holds true for servers in Azure, as well as servers on your local network.

Special shout out to Bret Stateham for bringing this “remote admin road-bump” to my attention. Sometimes what looks like an “Azure” problem, is really a “Server” feature. 🙂

The Imperfect Lab: A Few VM Manageability Tweaks

Today in the Imperfect Lab I’m going to work on some clean up to improve the manageability of my new domain controllers. Since I have two of them, I want to take advantage of Azure’s service level agreement. The only way to ensure that Azure keeps at least one DC running at all times is to create an availability set, which will distribute the VMs within a set across different update and fault domains.

Some notes about Availability Sets – VMs must be in the same cloud service and you can have a maximum of 50 in each set. You will find that your machines are spread across 2 fault domains and upwards of 5 update domains. Also, avoid creating a set with just one machine in it, because once you create a set you won’t get notifications about maintenance regarding those update/fault areas.

Since my machines have already been created, I used the following PowerShell to add them to a set named "ADDC":

Get-AzureVM -ServiceName "imperfectcore" -Name "dc-cloud1" |
    Set-AzureAvailabilitySet -AvailabilitySetName "ADDC" |
    Update-AzureVM

Get-AzureVM -ServiceName "imperfectcore" -Name "dc-cloud3" |
    Set-AzureAvailabilitySet -AvailabilitySetName "ADDC" |
    Update-AzureVM

If you want a quick gander at all the availability sets that exist in your subscription, run this:

(Get-AzureService).servicename | foreach {Get-AzureVM -ServiceName $_ } | select name,AvailabilitySetName

Since the GUI does hold a fond place in my heart, I do want the dashboard of Server Manager on one of the VMs to show the status of all the servers in the domain. You’ll notice that if you log into the desktop of one of these newly created servers, “Remote Management” will be disabled. This needs to be enabled to allow management from other servers, so run "winrm quickconfig -q" against each server to turn it on. You will have to start a PS-Session to each server for that; a rough sketch follows.

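Here is what that could look like with the classic Azure PowerShell module, assuming the default WinRM HTTPS endpoints created when the VMs were provisioned are still in place (the skip-check session options are only there because of the default self-signed certificate):

$cred = Get-Credential
$opt = New-PSSessionOption -SkipCACheck -SkipCNCheck   # default endpoint uses a self-signed cert
foreach ($vm in "dc-cloud1", "dc-cloud3") {
    # Get-AzureWinRMUri returns the HTTPS remoting endpoint for a classic VM
    $uri = Get-AzureWinRMUri -ServiceName "imperfectcore" -Name $vm
    Invoke-Command -ConnectionUri $uri -Credential $cred -SessionOption $opt `
        -ScriptBlock { winrm quickconfig -q }
}
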
Finally, since I expect to reduce the number of times I’m logging into a machine directly, I’m going to switch one of the DCs to Server Core and the other to the MinShell format. These commands do take a while to complete and require a restart to finish the configuration, so don’t panic if you can’t connect to what look like “running” VMs in Azure for a few minutes after the reboot.

For Server Core (from a Machine running the Full GUI):

Remove-WindowsFeature -name User-Interfaces-Infra
Restart-Computer -Force

For MinShell (from a Machine running the Full GUI):

Remove-WindowsFeature -name Server-GUI-Shell
Restart-Computer -Force

With the MinShell installation I will still have access to the nice Server Manager dashboard when I want it and will be able to remotely manage the 2nd domain controller from it.

The Quest for the Perfect Lab

There are a few old sysadmin jokes out there… one that often comes to mind for me these days is the one-liner about how the perfect network is one that no one is on.  But now that I have the luxury of being able to build just about any lab network I want (either in Azure or using Hyper-V) I find myself nearly paralyzed by wanting to build the “perfect” network/lab for my needs.

I start, I stop, I get sidetracked by a different project, I come back to my plan, only to realize I’ve forgotten where I left off (or forgotten where I wrote down that fancy admin password for that VM) and end up tearing it out and starting over again. The end result is I’m getting nowhere fast.

I’ve got several MCSE exams in my future that I need to build some hands-on environments for. I have a little internal metric telling me I need to improve my PowerShell a bit more. I have work training items that sort of fit into all this, and I keep striving for the perfect lab, the perfect naming system, the perfect password that I won’t forget… well, I guess my “perfectionist” is showing.

It’s a slow week here in the office with the Thanksgiving holiday approaching, so now is the perfect time to sit down with a pen and a paper and really figure out what I’m going to build and what I want to use it for.

Because there is something worse than a network that no one uses.  It’s that network I keep deleting.

Microsoft Top Support Issues: A Compilation

I know sometimes when I troubleshoot server issues, I feel like my issue is one of a kind. A special server snowflake. Though it probably isn’t.

Ever wonder what the most common support issues handled by Microsoft are?

Enjoy perusing the Top Support Solutions Blog! This might just save you some time when you are faced with that next head scratcher. Some of the product lines included (so far) are:

  • Exchange
  • Windows Server
  • Windows 8
  • Lync
  • System Center
  • SharePoint
  • SQL Server

Tips on AD replication. Update lists. DNS optimization. Client activation. STOP Errors. ActiveSync FAQ.  Outlook Anywhere.

There is even a Quick Start for upgrading Domain Controllers in domains with servers older than Server 2008.

This is the repository of tidbits that you are looking for. I started this blog as a place I could collect handy information for myself. I can now sleep peacefully at night knowing there’s something even better.

Go forth and troubleshoot!

Home Tech Support: What happened to my picture thumbnails?

You know you have to do it.  Your parents, your sister, grandpop… they all ask for help with their computers when you are around.  So here’s a quick one I got during my family vacation last week – The thumbnail view of a folder my father kept pictures in wasn’t showing the photos anymore.

Somehow the setting for “Always show icons, never thumbnails” was selected in the Folder Options settings, under the View tab. 

I’m guessing an application changed it, though I wouldn’t be totally surprised to find out he was mucking around in there.

Another reason thumbnails might not show up is if the hard drive is mostly full.  Windows will stop generating thumbnails to save space.  That was the first thing I checked, but wasn’t the issue in this case.

Sysadmin Appreciation Day Coming!

Don’t forget that Sysadmin Appreciation Day is on Friday, July 26th!  If you are a sysadmin, this might be a great time to take advantage of a floating holiday you’ve been saving. 

If you AREN’T a sysadmin you might want to take a look around for your nearest helpdesk support person, the guy who last fixed your misbehaving keyboard, the person who helped you solve that problem with your voicemail, the gal who configured your mobile phone with corporate email, or the guy who restored that file you were looking for from last week’s backup.

If you need ideas for how to celebrate or appreciate your neighborhood sysadmin, visit http://sysadminday.com/.

GPT, UEFI, MBR, oh my!

One of my first tasks in my new role is to get started building out my demo laptop. I was issued a nice workstation-grade Lenovo W530. It came preinstalled with the standard Microsoft Windows 8 Enterprise image. As my demo machine, I want a base OS of Server 2012 instead, so I set out to wipe the machine and reinstall.

Since the preinstalled OS was Windows 8, the BIOS was configured for Secure Boot from UEFI Only devices. In addition, UEFI is required if you want to use GPT style disks instead of the legacy MBR style disks. So this Lenovo came out of the box configured with every modern bell and whistle.

First things first, I need the Lenovo to boot from USB. To add that support, I jumped into the BIOS and went to the Boot menu under Startup. It shows the list of boot devices, but it’s necessary to scroll down some to find the excluded items and add back in the appropriate USB HDD.

The next important decision is whether to install Windows Server 2012 on the GPT disk or use DISKPART to reconfigure it back to MBR. (The DISKPART commands to convert from GPT to MBR and vice-versa are readily available using your search engine of choice.) GPT supports larger disk sizes, but the solid-state disk in this machine isn’t that large, so I could go either way. However, you need to know which you are doing because it determines how you set up your bootable USB and your BIOS.
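
If you do decide to flip the disk layout, the DISKPART sequence itself is short. This is only a sketch: it assumes disk 0 is the target, it erases everything on that disk, and you would typically run it from the command prompt available during Windows Setup (Shift+F10). Swap "convert gpt" for "convert mbr" to go the other direction.

diskpart
list disk
select disk 0
clean
convert gpt
exit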

If you convert your disk from MBR to GPT or vice versa, it will wipe all your data. Make sure starting with a clean slate is REALLY what you want to do. Also, while my goal is to install Server 2012, these settings and instructions would also apply if you are trying to install a different version of Windows, such as Windows 8.

For Lenovo, the BIOS settings need to go like this for GPT:
  • Secure Boot – Off
  • UEFI/Legacy Boot – UEFI Only

Also, your USB media NEEDS to be formatted FAT32. (This limits the size of a single file on the USB to 4GB, so watch the size of your image.wim file if you customize it.)

And like this for MBR:
  • Secure Boot – Off
  • UEFI/Legacy Boot – Both (with Legacy First)

Your USB media can be formatted NTFS; FAT32 isn’t a requirement.

Take note: if you boot from NTFS media and try to install the OS on a GPT disk, you won’t be able to select a partition to install to; you’ll be warned that you can’t install to a GPT disk and will have to cancel out of the installer. Even if you are doing everything correctly from FAT32 media, you’ll get a warning that the BIOS might not have the drivers to load the OS. This warning is safe to ignore – you can still continue through the install process and setup will create all the necessary partitions to support GPT.

Once all my pre-reqs were sorted out, I rebooted the machine and the Server 2012 install files started to load. After I clicked INSTALL to get things going, I received an error message that read:

The product key entered does not match any of the Windows images available for installation. Enter a different product key.

Well, huh? Now granted, it’s been a while since I’ve attempted to install a Server OS on a laptop, but I surely didn’t miss a place to enter a product key! After some research I found this KB article, which details the logic for locating product keys when installing Windows 8 and Windows Server 2012:

  1. Answer file (Unattended file, EI.cfg, or PID.txt)
  2. OA 3.0 product key in the BIOS/firmware
  3. Product key entry screen

Turns out the Lenovo has a preinstalled OEM license for Windows 8 Pro in the firmware. This saves OEMs from having to put stickers with software keys on the bottoms of machines and ensures that the OEM license stays with the machine it was sold with. Enterprises that deploy images with another licensing model are usually using some kind of deployment tool and an image with an answer file, allowing them to bypass the check against the firmware key.

For my scenario, I wanted the quickest, easiest way to provide my key. Turns out the PID.txt file is a no-brainer. You can reference this (http://technet.microsoft.com/en-us/library/hh824952.aspx) for all the details, but all you need to do is create a text file called PID.txt with these two lines:

[PID]
Value=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

Put your product key in for the value and save the file in the \Sources folder of your install media. From there it was smooth sailing. After your OS is installed, feel free to turn Secure Boot back on in the BIOS.
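
If you want to script that last step, here is a minimal sketch; the E: drive letter for the USB media and the placeholder key are assumptions you would swap for your own:

# Assumes E: is the bootable USB stick and that \Sources already exists on it.
$pidText = @"
[PID]
Value=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
"@
$pidText | Out-File -FilePath "E:\Sources\PID.txt" -Encoding ascii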

Your Tier 1 Support is in the Wrong Place

Lots of us started there. Depending on the size of the company you work for, you might still be doing some of it. Classic Tier 1 support calls are often things like password changes, mouse and keyboard issues, and other things often resolved by the end user either rebooting their machine or logging out and back in.

And I’m almost certain that you have the wrong people handling that job, particularly if that person is you or someone on your team who is also responsible for other more technical projects. Stick with me on this for a minute.

I’ve always been a big advocate of the administration departments and the IT departments working closely together and I think that any administrative or executive assistant worth their salt can handle most Tier 1 Helpdesk tickets. Here’s why: they already have their hand on the pulse of pretty major areas of your company and often work directly with executives and managers.

They know what guests and visitors are coming to your location – relevant IT tasks include providing WiFi passwords to guests, explaining how to use the phones and A/V systems, and alerting IT ahead of time to guests who need additional resources.

They know when Execs are grumbling about IT issues that can become emergencies (noisy hard drives, problems with applications) and can let IT know ahead of time of pending maintenance issues. 

They can easily be cc’d on emails regarding upcoming password expiration for key executives or managers and make sure those people complete those tasks in a timely manner. Resetting passwords and unlocking accounts is an easy activity that can be delegated out to admin staff with a quick training session (the sketch below shows the sort of commands involved). With the proper permissions, you can give them only the abilities they need and nothing more.
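
For what it’s worth, the day-to-day commands involved really are simple. This is just a sketch using the ActiveDirectory PowerShell module; the username is a placeholder and it assumes the delegated rights are already in place:

# Requires the ActiveDirectory module (RSAT) and delegated reset/unlock rights.
Import-Module ActiveDirectory
Unlock-ADAccount -Identity "jsmith"                     # placeholder username
Set-ADAccountPassword -Identity "jsmith" -Reset `
    -NewPassword (Read-Host "New password" -AsSecureString)
Set-ADUser -Identity "jsmith" -ChangePasswordAtLogon $true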

Opening tickets, resetting voicemail passwords for phone systems, replacing batteries in wireless mice, swapping out broken keyboards, changing printer toners, basic troubleshooting of printer jams, updating job titles in Active Directory… That’s just off the top of my head.

So what good could come of this? First off, there is a big lack of women in systems admin roles. I was just on a WiT panel last week discussing how to get more women into this role. Turns out, 3 out of 4 women on the panel started in administrative roles. It’s a great way for someone to get a glimpse into the “plumbing” of how systems and network administration keep businesses running. 

Second, most executive assistants are great managers of time and of people, and can often see and understand the big picture of how a company runs, all characteristics that make successful sysadmins. Letting them handle some of the front facing issues can also take away some of the “mystery” of the IT department.

Integrating these two functions can provide great cost savings to your company and a pipeline of future staff to pull from when you have an opening in the IT department, and as a bonus, you’re doing something to help more women begin their technical careers.

So go ahead, steal the receptionist.

Playing IT Fast and Loose

It’s been a long time since I’ve been at work from dusk ’til dawn. I’m not saying that I’m the reason we have such fabulous uptime; there are a lot of factors that play into it. We’ve got a well-rounded NetOps team, we try to buy decent hardware, we work to keep everything backed up, and we don’t screw with things when they are working. And we’ve been lucky for a long time.

It also helps that our business model doesn’t require selling things to the public or answering to many external “customers”. Which puts us in the interesting position where it’s almost okay if we are down for a day or two, as long as we can get things back to pretty close to where they were before they went down. That also sets us up to make some very interesting decisions come budget time. They aren’t necessarily “wrong”, but they can end up being awkward at times.

For example, we’ve been working over the last two years to virtualize our infrastructure. This makes lots of sense for us – our office space requirements are shrinking and our servers aren’t heavily utilized individually, yet we tend to need lots of individual servers due to our line of business. When our virtualization project finally got rolling, we opted to use a small array of SAN devices from Lefthand (now HP). We’ve always used Compaq/HP equipment, and we’ve been very happy with the dependability of the physical hardware. Hard drives are considered consumables and we do expect failures of those from time to time, but whole systems really biting the dust? Not so much.

Because of all the factors I’ve mentioned, we made the decision to NOT mirror our SAN array. Or do any network RAID. (That’s right, you can pause for a moment while the IT gods strike me down.) We opted for using all the space we could for data and weighed that against the odds of a failure that would destroy the data on a SAN, rendering the entire RAID 0 array useless.

Early this week, we came really close. We had a motherboard fail on one of the SANs, taking down our entire VM infrastructure. This included everything except the VoIP phone system and two major applications that have not yet been virtualized. We were down for about 18 hours total, which included one business day.

Granted, we spent the majority of our downtime waiting for parts from HP and planning for the ultimate worst – restoring everything from backup. While we may think highly of HP hardware overall, we don’t think very highly of their 4-hour response windows on Sunday nights.  Ultimately, over 99% of the data on the SAN survived the hardware failure and the VMs popped back into action as soon as the SAN came back online. We only had to restore one non-production server from backup after the motherboard replacement.

Today, our upper management complimented us on how we handled the issue and was pleased with how quickly we got everything working again.

Do I recommend not having redundancy on your critical systems? Nope.

But if your company management fully understands and agrees to the risks related to certain budgeting decisions, then as an IT Pro your job is simply to do the best you can with what you have and clearly define the potential results of certain failure scenarios.

Still, I’m thinking it might be a good time to hit Vegas, because Lady Luck was certainly on our side.