Moving Up to RTM for Windows 7

Finally found a moment to install the RTM version of Windows 7 Ultimate on my netbook. I know there was some grumbling on the Internet about how one can’t directly upgrade from the RC to RTM of Windows 7, but I hardly found it a problem. I’m a big fan of clean installs.

First off, I used a USB key for the install instead of the DVD, which cut the installation time down to about 45 minutes. For tips on how to set your USB key up for this task, check out this TechNet Tip on Using a USB Key to Install Windows 7.
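If you haven’t prepped a bootable USB key before, the process boils down to partitioning and formatting the key with diskpart and then copying the DVD contents over. Here’s a rough sketch; the disk number and drive letters are placeholders, so verify them with “list disk” before you clean anything:

    diskpart
    rem -- inside diskpart; this assumes the USB key shows up as disk 1 --
    list disk
    select disk 1
    clean
    create partition primary
    select partition 1
    active
    format fs=ntfs quick
    assign
    exit

    rem Copy the Windows 7 media to the key (D: = DVD drive, E: = USB key)
    xcopy D:\*.* E:\ /s /e /f

The TechNet tip linked above walks through the same steps in more detail.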

I did back up all my personal documents before running the installation, but counted on the fact that the “windows.old” directory would have everything I’d want to transfer. Sure enough, it only took a couple of clicks to return my documents, photos and music to their rightful place. I had to handle a few driver issues again (see my previous post about installing Windows on the Samsung NC10) so I could get my function keys working properly. But since I had downloaded those drivers before, they were also waiting for me in the “windows.old” directory.
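If you’d rather script the copy out of “windows.old” than click through Explorer, robocopy handles it nicely. A minimal sketch, assuming the RC install used the default profile layout and with “myuser” standing in for your account name:

    rem Pull personal folders back from the old installation (paths are placeholders)
    robocopy C:\Windows.old\Users\myuser\Documents C:\Users\myuser\Documents /E
    robocopy C:\Windows.old\Users\myuser\Pictures C:\Users\myuser\Pictures /E
    robocopy C:\Windows.old\Users\myuser\Music C:\Users\myuser\Music /E

The /E switch carries along all subfolders, empty ones included.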

The only thing left after that was the applications I use on a regular basis. The majority of the applications I use are web-based, so I was left with reinstalling AV software, iTunes, Firefox, OpenOffice and a few other minor items. Once that was done, I was back in action.

I admit I didn’t do everything all in one sitting, but overall I don’t think it would have taken me more than 2.5 hours from start to finish if I had. Not too shabby.

Restoring ImageRight in the DR Scenario

Our document imaging system, ImageRight, is one of the key applications that we need to get running as soon as possible after a disaster. We’ve been using the system for over 2 years now and this is the first time we’ve had a chance to look closely at what would be necessary in a full recovery scenario. I’d been part of the installation and the upgrade of the application, so I had a good idea of how it should be installed. I also had some very general instructions from the ImageRight staff regarding recovery, but no step-by-step instructions.

The database is SQL Server 2005, and by this point in the project it wasn’t our first SQL restoration, so that part went relatively smoothly. We had some trouble restoring the “model” and “msdb” system databases, but our DBA decided those weren’t critical to ImageRight and let the versions from the clean installation stay.
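For what it’s worth, if you ever need to pull a database back manually rather than through the backup software, the restore itself is a one-liner with sqlcmd. This is only a sketch; the server, database and backup file names below are made up for illustration and won’t match ImageRight’s actual names:

    sqlcmd -S DRSQL01 -E -Q "RESTORE DATABASE ImageRight FROM DISK = N'D:\Backups\ImageRight.bak' WITH REPLACE, RECOVERY"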

Once the database was restored, I turned to the application server. A directory shared as “Imagewrt$” is required, as it holds all the installation and configuration files. We don’t have all the same servers available in the lab, so we had to adjust the main configuration file to reflect the new location of this important share. After that, the application installation had several small hurdles that required a little experimentation and research to overcome.
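Recreating the share itself on the replacement server is quick. The path below is just a placeholder for wherever the restored files landed, and the share and NTFS permissions still need to be set to match production:

    net share Imagewrt$=E:\ImageRight\Imagewrt /remark:"ImageRight install and config files"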

First, the SQL Browser service is required to generate the connection string from the application server to the database, and that service isn’t started automatically in a standard SQL installation. Second, the ImageRight Application Service won’t start until it can authenticate its DLL certificates against the http://crl.verisign.net URL. Our lab setup doesn’t have an Internet connection at the moment, so this required another small workaround – temporarily changing the IE settings for the service account so it doesn’t check for the publisher’s certificate revocation.
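The SQL Browser fix is two commands; setting the service to start automatically keeps it from biting you again after a reboot:

    sc config SQLBrowser start= auto
    net start SQLBrowser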

Once the application service was running, I installed the desktop client software on the machine that will provide remote desktop access to the application. That installed without any issue, and the basic functions of searching for and opening image files were tested successfully. We don’t have the disk space available in the lab to restore ALL the images and data, so any images from before our upgrade to version 4.0 aren’t available for viewing. We’ll have to take note of the growth on a regular basis so that in the event of a real disaster we have a realistic idea of how much disk space is required. This isn’t the first time I’ve run short during this test, so I’m learning my current estimates aren’t accurate enough.
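Even a crude number beats a guess here. Something as simple as this, run against the image store on a schedule (the path is a placeholder), prints a grand total at the bottom of its output that can be logged over time:

    dir /s E:\ImageRight\Images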

Of course, it hasn’t been fully tested and there are some components I know we are using in production that might or might not be restored initially after a disaster. I’m sure I’ll get a better idea of what else might be needed after we have some staff from other departments connect and do more realistic testing. Overall, I’m pretty impressed with how easy it was to get the basic functionality restored without having to call ImageRight tech support.

Failed SQL Restores That Actually Succeed

This week’s adventure in disaster recovery has been with one of our in-house SQL applications. The application has several databases that need to be restored, and we find that the Backup Exec restore job is reported as having failed with the error “V-79-65323-0 – An error occurred on a query to database .” This error doesn’t prevent SQL from using the databases properly and hasn’t appeared to affect the application.

Once the job completes, Backup Exec also warns that the destination server requires a reboot. We suspect that Backup Exec is unable to run its validation query against the restored database until that reboot happens, so the error is somewhat superfluous.

We are going to experiment a bit to see if turning off the post-restore consistency checks eliminates this error in the future, but for the moment we’ve opted to note the error in our recovery documentation so we don’t spend time worrying about it during another test or during a real recovery scenario.

We’ve also found that, for some reason, it’s very important to pre-create the subfolders under the FTData folder before restoring the databases. If these folders related to the full-text indexes aren’t available, the job will fail, too. This has required our DBA to write some scripts to have on hand at restore time, both to create these directories and to drop and recreate the indexes once everything is restored.
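The directory-creation half of those scripts is just a handful of md commands. A sketch, assuming the default SQL 2005 instance path and with made-up catalog folder names; the real names come from whatever full-text catalogs the databases define:

    set FTDATA=C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData
    md "%FTDATA%\PolicyCatalog"
    md "%FTDATA%\ClaimsCatalog"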

While I appreciate learning more about the database backend of some of our applications, I’m so glad I’m not a DBA. 🙂

Paper vs. Electronic – The Data Double Standard

One of the main enterprise applications I’m partly responsible for administering at work is our document imaging system. Two years have passed since implementation and we still have some areas of the office dragging their feet about scanning their paper. On a daily basis, I still struggle with the one big elephant in the room – the double standard that exists between electronic data and data that is on paper.

The former is the information on our Exchange server, SQL servers, financial systems, file shares and the like. The latter is the boxes and drawers of printed pages – some of which originally started out on one of those servers (or a server that existed in the past) and some of which did not. In the event of a serious disaster it would be impossible to recreate those paper files. Even if the majority of the documents could be located and reprinted, any single group of employees would be unable to remember everything that existed in a single file, never mind hundreds of boxes or file cabinets. In the case of our office, many of those boxes hold data dating back decades, including handwritten forms and letters.

Like any good company, we have a high level plan that dictates what information systems are critical and the amount of data loss that will be tolerated in the event of an incident. This document makes it clear that our senior management understands the importance of what the servers in the data center contain. Ultimately, this drives our IT department’s regular data backup policies and procedures.

However, IT is the only department required by this plan to ensure the recovery of the data we are custodians of. What extent of data loss is acceptable for the paper data owned by every other department after a fire or earthquake? A year of documents lost? 5 years? 10 years? No one has been held accountable for answering that question, yet most of those same departments won’t accept more than a day’s loss of email.

Granted, a lot of our paper documents are stored off site and only returned to the office when needed, but there are plenty of exceptions. Some staffers don’t trust off site storage and keep their “most important” papers close by. Others in the office will tell you that the five boxes next to their cube aren’t important enough to scan, yet are referenced so often they can’t possibly be returned to storage.

And therein lies the battle we wage daily as the custodians of the imaging system: simply getting everyone to understand the value of scanning documents into the system so they are included in our regular backups. Not only are scanned documents easier to organize, easier to access, more secure and subject to better audit trails, they also stand a significantly better chance of survival when that frayed desk lamp cord goes unnoticed.

Windows Server July 2010 Support Changes

On July 13, 2010, several Windows Server products will hit new points in their support lifecycle. Windows 2000 Server will move out of Extended Support and will no longer be publicly supported. Windows Server 2003 and Server 2003 R2 will move from Mainstream Support to Extended Support, which will last another 5 years.

This forces a new deadline on some of the improvements that need to be planned at my office. Our phone system and our main file server are still operating on 2000 Server. I have been planning to upgrade the phone system for a long time now, but it continually gets pushed back by other, more pressing projects. Our file server is an aging, but sturdy, HP StorageWorks NAS b3000 – “Windows-powered” with a specialized version of 2000 Server. Both deserve more attention than they’ve been getting lately, so now there is a reason to move those items higher up the list.

For more information about these support changes, check out “Support Changes Coming July 2010” at the Windows Server Division Weblog.

More Windows 7 Beta Exams

There are two new Windows 7 beta exams available for a short time. As with most beta exams, you won’t find any official study material, and you likely wouldn’t have enough time to read it anyway. However, if you’ve been using and testing Windows 7 since its beta days, it’s worth a shot to take one of these exams.

The promo code will get you the test for free and you’ll get credit for the real deal on your transcript if you pass. Seats and time slots are VERY limited, so don’t waste time thinking about it too long.

71-686: PRO: Windows 7, Enterprise Desktop Administrator
Public Registration begins: September 14, 2009
Beta exam period runs: September 21, 2009 – October 16, 2009
Registration Promo Code: EDA7

When this exam is officially published (estimated date 11/16/09), it will become the official 70-686 exam, which is one of two exams needed for the MCITP: Enterprise Desktop Administrator 7 certification.

71-685: PRO: Windows 7, Enterprise Desktop Support Technician
Public Registration begins: September 14, 2009
Beta exam period runs: September 14, 2009 – October 16, 2009
Registration Promo Code: EDST7

When this exam is published (also on 11/16/09) as 70-685, it will count as credit toward the MCITP: Enterprise Desktop Support Technician 7 certification. This certification isn’t listed yet in the MCITP certification list, but I suspect it will be paired with the 70-680 exam, much like the Enterprise Desktop Administrator 7 certification.

Information about these and other exams can be found at Microsoft Learning.

Restoring Exchange 2003

After getting Active Directory up and running in my disaster recovery lab, my next task was to restore the Exchange server. The disaster we all think of most in San Francisco is an earthquake, which could either make transportation to the office impractical or render the office physically unsafe. Email is one of our most used applications, and in a disaster we predict it will be the primary means of communication between users working from outside the office. Assuming a functional Internet backbone is available to us, email will likely be the fastest way to get our business communications flowing.

Restoring Exchange 2003 is a straightforward process, provided you have all the previous configuration information available. In our DR kit, we have a copy of Exchange 2003 and the Service Pack we are currently running. We also have a document that lists the key configuration information. Before you restore, you’ll want to know the drives and paths for the program files, the database and the log files. If the recovery box has the same drive letters available, the restoration is that much smoother: you can set the default installation paths to the same locations and ensure that you’ll have the necessary amount of space free.

It’s important to remember to install Exchange and the Service Pack using the /DisasterRecovery switch. This sets up the installation to expect recovered databases instead of automatically creating new ones. I had to manually mount the databases after the restoration, even though I had indicated in the Backup Exec job that the databases should be remounted when the restore was complete. Microsoft KB 262456 details the error event message I was getting about the failed MAPI call “OpenMsgStore”, which was a confirmed problem in Exchange 2000/2003.
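The switch applies both to the base setup and to the service pack. Assuming E: holds the install media and the service pack was extracted to C:\ExchSP (both paths are placeholders), the two commands look roughly like this:

    rem Base Exchange 2003 install in disaster recovery mode
    E:\setup\i386\setup.exe /disasterrecovery

    rem Then the same service pack the server was running, same switch
    C:\ExchSP\setup\i386\update.exe /disasterrecovery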

Don’t Overlook the Metadata Cleanup

It seems inevitable that while restoring Active Directory in a disaster recovery scenario, one is going to feel rushed. Even with this being a test environment, I felt like getting AD back was something that needed to be quick so we could move on to the more user-facing applications, like Exchange.

My network has two Active Directory domains, a parent and a child domain in a single forest. The design is no longer appropriate for how our company is organized, and we’ve been slowly working to migrate servers and services to the root domain. Right now, we are down to the last 3 servers in our child domain and one remaining service account. The end is in sight, but I digress.

The scope of our disaster recovery test does not involve restoring that child domain. This is becoming an interesting exercise, because it will force us to address how to get those few services that reside in that domain working in the DR lab. This will also help us when we plan the process for moving those services in production.

Bringing back a domain controller for my root domain went by the book. I could explain away all of the random error messages, as they were all related to this domain controller being unable to replicate to other DCs, which hadn’t been restored. I had recovered the DC that held the majority of the FSMO roles and seized the others. I started moving on to other tasks, but I couldn’t get past the errors about this domain controller being unable to find a global catalog. All the domain controllers in our infrastructure are global catalogs, including this one, as I hadn’t made a change to the NTDS settings once it was restored.
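Seizing a role is an NTDSUTIL session. A sketch, with “DR-DC1” standing in for the restored domain controller; repeat the seize command for whichever roles the DC doesn’t already hold (schema master and domain naming master have their own seize commands), and remember that seizing is only for when the old role holder is never coming back:

    ntdsutil
    roles
    connections
    connect to server DR-DC1
    quit
    seize pdc
    seize rid master
    seize infrastructure master
    quit
    quit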

So I took the “tickle it” approach and unchecked/rechecked the Global Catalog option. The newly restored DC successfully relinquished its GC role and then refused to complete the process of regaining it. It was determined to verify this status with the other domain controllers it knew about but couldn’t contact.

I knew for this exercise, I wasn’t bringing back any other domain controllers. And in reality, even if I was going to need additional DCs, it was far easier (and less error-prone) to just promote new machines than to bother restoring every DC in our infrastructure from tape. (However I still back up all my domain controllers, just to be prepared.)

To solve the issue, I turned to metadata cleanup. Using NTDSUTIL, I removed the references to the other DC in the root domain, the DC for the child domain and, finally, the lingering and now orphaned child domain itself. I also had to go into “AD Domains and Trusts” to delete the trust to the child domain, which wasn’t removed when the metadata was deleted. Once all these references were removed, the domain controller was able to assume the global catalog role, and I could comfortably move on to restoring our Exchange server.
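For reference, removing a dead DC’s metadata with NTDSUTIL goes roughly like this. “DR-DC1” is a placeholder for the restored server you connect to, and the numbered selections depend entirely on what the list commands show in your environment:

    ntdsutil
    metadata cleanup
    connections
    connect to server DR-DC1
    quit
    select operation target
    list domains
    select domain 0
    list sites
    select site 0
    list servers in site
    select server 0
    quit
    remove selected server
    quit
    quit

Removing the orphaned child domain is the same dance, ending with “remove selected domain” instead.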

And I’ve learned that just because I can explain an error, doesn’t mean I can ignore it.

AD Recycle Bin – New in Server 2008 R2

This week I continued with disaster recovery testing in our lab, the first machine restored from tape being one of our domain controllers. While checking over the health of the restored Windows 2003 Active Directory, I remembered that we are using a third-party tool in production to aid in the recovery of deleted items – Quest’s Active Directory Recovery Manager. To be honest, we haven’t had a reason to use the software since we installed it, which I suppose is a good thing. But it is a stress reliever to know that it’s there for us.

Restoring this product in our test lab isn’t part of the scope of this project, but it does have me looking forward to planning our active directory migration to Server 2008 R2, which includes a new, native “recycle bin” feature for deleted active directory objects. You can find more details about how this feature works in Ned Pyle’s post on the Ask the Directory Services Team blog, The AD Recycle Bin: Understanding, Implementing, Best Practices, and Troubleshooting.

While the native feature doesn’t have the ease of a GUI and requires your entire forest to be at the 2008 R2 functional level, it’s certainly worth becoming familiar with. Once I’m done with all this disaster testing, you can be sure this feature will be at the top of my list to test out when I’m planning that upgrade.
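Once the forest functional level is raised, enabling the feature is a single line from the Active Directory module for Windows PowerShell. A sketch, with “example.com” standing in for your forest root; note that enabling it is a one-way operation:

    Import-Module ActiveDirectory
    Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'example.com'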

Check Out TechNet Events

Today I enjoyed a morning at the Microsoft office in SF attending an event in the current series of TechNet Events. Through the months of September and October, the TechNet Events team is traveling around the US providing tips, solutions and discussion about using Windows 7 and Server 2008 R2.

Today’s presentation was given by Chris Henley, who led some lively and informative discussions on three topics – tools for migrating from Windows XP to Windows 7, securing Windows 7 in a Server 2008 R2 environment (with BitLocker, NAP and DirectAccess) and new features in Directory Services.

I was excited to see specific information on Active Directory. If you missed the blogs about Active Directory Administrative Center back in January like I did, you’ll like some of the new features in this 2008 R2 tool, including the ability to connect to multiple domains and improved navigation views.

If there isn’t an event near you this time around, check back after the holidays when they’ll head out again for another series.