Applying inheritable permissions to a large number of Active Directory objects

I was recently involved in a project to help secure a school's Active Directory environment. After sitting down and planning what we wanted to achieve in terms of account security, we went to work. After setting up all of the IT security groups and assigning and delegating the appropriate rights and permissions, we found that some things weren't working as they should. The permissions gave us the rights to do what we needed on the student user objects, but not on the staff ones. After taking a quick look we found that the majority of staff accounts didn't have the inherit permissions from parent option ticked, which prevented the delegation from flowing through to those user objects.

Looking at PowerShell, there are Get-ADUser and Set-ADUser, which allow us to get and set certain properties on user objects, but they still didn't allow us to set inheritance on objects. I then happened to stumble upon a management pack of PowerShell cmdlets from Quest Software, which is available from this link. The pack contains some useful cmdlets which extend the original Microsoft-provided ones, including one dealing specifically with object security, which is exactly what we were after. So I went ahead and downloaded the 64-bit version to one of the domain controllers (after testing it out myself) and worked out we needed to filter for users who didn't have inherited permissions enabled. The following snippet will list all of the users in your AD environment with inheritance disabled (watch the word wrap):

Get-QADUser -SizeLimit 0 |
Where-Object {$_.DirectoryEntry.psbase.ObjectSecurity.AreAccessRulesProtected}

If you are after a particular Organizational Unit, simply replace -SizeLimit 0 with -SearchRoot 'Distinguished name of OU'.
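For example, to list only the users under a single OU with inheritance disabled (the distinguished name below is just a placeholder):

Get-QADUser -SearchRoot 'OU=Staff,DC=school,DC=local' |
Where-Object {$_.DirectoryEntry.PSBase.ObjectSecurity.AreAccessRulesProtected}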

Now we are able to find the users, but what about setting the inheritance? Using the object security cmdlet we can now set the inheritance flag. The following is the complete command to run against the affected OU (again, watch the word wrap):

Get-QADUser -SearchRoot 'Distinguished Name of OU' |
Where-Object {$_.DirectoryEntry.PSBase.ObjectSecurity.AreAccessRulesProtected} |
Set-QADObjectSecurity -UnlockInheritance

After running that command on the offending user objects, we were then able to successfully do what the security groups allowed us to do. I still need to go back and check out what else the pack from Quest can do, as it looked quite interesting, so I will be sure to blog about my findings.

Exchange Management Console not setting Permissions for Receive Connectors, fixing 5.7.1 Client was not authenticated issues with inbound e-mails

I was recently helping out on a migration from Exchange 2003 to Exchange 2010. The organisation was moving from two servers (a front end and a back end server) to four: two Mailbox servers running in a DAG (Database Availability Group) configuration and two Client Access Servers in an array, with the Hub Transport role also on the two CAS servers. They had been running their Exchange 2003 server for a while with routing connectors set up, so the 2003 machine was handling all of the external mail, and it had finally come time to decommission the old server and tell the firewall to push e-mail to the two new Hub Transport servers.

By default, Exchange 2010 sets up two Receive Connectors, Client SERVERNAME and Default SERVERNAME, with the Default connector handling your external e-mail. For external servers to connect to the connector, we need to enable anonymous authentication on it. So we went ahead, opened up the connector, ensured that Anonymous users was ticked, clicked Apply for good measure and then OK. After using telnet to remotely connect to the e-mail server (via 3G), we received a Client was not authenticated message after issuing the MAIL FROM command. So it was clear to me that the EMC (Exchange Management Console) wasn't saving the authentication settings, and we resorted to PowerShell to configure the connector. I opened up the Exchange Management Shell and issued the following command:

Get-ReceiveConnector -Identity "Default SERVERNAME" | Format-List

The output contained everything that was ticked except the Anonymous entry we had just enabled. So we decided to run a Set command to update the permission groups on the Default connector. I issued the following command, with SERVERNAME being the name of the Exchange server hosting the connector:

Set-ReceiveConnector "Default SERVERNAME" -PermissionGroups "ExchangeUsers, ExchangeServers, ExchangeLegacyServers, AnonymousUsers"

We then ran the Get command again and the Anonymous entry was listed. After a few minutes we used 3G and telnet to connect back into the e-mail server, and after getting past the MAIL FROM command we were relieved that the issue was resolved and that the servers could now accept incoming external e-mail. Microsoft have a good article up on TechNet covering Exchange 2010 Receive Connectors (as well as the defaults Exchange setup creates): Understanding Receive Connectors.
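As an aside, if you only want to see the permission groups rather than wade through the full output, you can narrow the Get command down (a sketch):

Get-ReceiveConnector "Default SERVERNAME" | Format-List Name,PermissionGroups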

Note that you won't run into this issue if your transport server is an Edge Transport server, as Exchange setup automatically configures its connectors for external connections.

Using the Google Maps API with PHP to display a Map and place a Marker on the map using Geocoding

So I was recently developing a new feature for a client's website where we wanted to display an interactive Google Map for the address of a particular item (in this case a customer), along with a marker to show where it was on the map. I set about looking around on the Internet for some information, but didn't come up with a lot apart from Google's own API documentation. So I decided to piece a few things together with what I already knew and write my own little script to do what I wanted.

The following code is quite crude, but it should get you started on the right foot. I will eventually update the geocoding to v3 of the API, but since Google is still supporting the v2 geocoder, why bother. So first off is building the address we are going to use to query Google, which is the following code:

define("MAPS_HOST", "maps.google.com");
define("KEY", "abcdefg"); // Place your API Key here...
// Get our address (from a database query or POST)
$address = "100+Flinders+Street+Melbourne+VIC+3000";
// Build our URL from the above...
$base_url = "http://" . MAPS_HOST . "/maps/geo?q=" . $address . "&output=csv" . "&key=" . KEY;

So we now have the URL to query Google. You can actually paste it into your web browser and you will get an output similar to the following (the fields are the status code, accuracy, latitude and longitude):

200,8,-37.8161165,144.9714606

So now let's move on to getting PHP to actually process the URL and get us some location data. I decided to use cURL simply because it is quick and simple (I was rushing to get this together). The following code initialises cURL, retrieves the data from the URL and saves it into $csvPosition.

// Initialise cURL
$c = curl_init();
// Get the URL and save the Data
curl_setopt($c, CURLOPT_URL, $base_url);
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
// Save location data
$csvPosition = trim(curl_exec($c));
// Close the connection
curl_close($c);

You will probably want to add some error checking here, just in case your server can't reach the URL.
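As a minimal sketch, the fetch above could be wrapped like this (curl_exec() returns FALSE on failure when CURLOPT_RETURNTRANSFER is set; the exact handling is up to you):

// Initialise cURL and fetch the geocoding data, bailing out on failure
$c = curl_init();
curl_setopt($c, CURLOPT_URL, $base_url);
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec($c);
if ($result === false) {
    // Couldn't reach the geocoding service; log it and stop
    die('Geocoding request failed: ' . curl_error($c));
}
$csvPosition = trim($result);
curl_close($c);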

The final piece of PHP splits the result, which was requested and returned in CSV format, into separate variables, as in the following code:

// Split the pieces of data by the comma that separates them
// (status code, accuracy, latitude, longitude)
list($httpcode, $accuracy, $lat, $long) = explode(',', $csvPosition);
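It's also worth checking the status code before trusting the coordinates; in the v2 geocoder a status of 200 means success. A sketch, with the error handling left to you:

// Anything other than 200 means the address couldn't be geocoded
if ($httpcode != 200) {
    die('Geocoding failed with status ' . $httpcode);
}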

Now that we have performed the geocoding of the address and have the result split up and ready, we can create our map. Firstly we need to include the Google Maps API with the following HTML:

<script type="text/javascript"
    src="https://maps.googleapis.com/maps/api/js?sensor=false">
</script>

With the API included we can now build the map, with PHP supplying the latitude and longitude values we retrieved earlier from the geocoding web service with cURL. We initialise the API, build our map with some basic settings like zoom, and drop in a marker at the latitude and longitude provided by the geocoding.

  function initialize() {
    // Build the coordinates from the values PHP retrieved via the geocoder
    var latlng = new google.maps.LatLng(<?php echo $lat; ?>, <?php echo $long; ?>);
    var myOptions = {
      zoom: 16,
      center: latlng,
      mapTypeId: google.maps.MapTypeId.ROADMAP
    };
    var map = new google.maps.Map(document.getElementById("map_canvas"),
        myOptions);
    // Drop a marker on the geocoded address
    var marker = new google.maps.Marker({ map: map, position: latlng });
  }

We are placing the latitude and longitude we retrieved from the Google Maps geocoding service into the JavaScript function above via the variables $lat and $long. With that function in place, we can create a div in the body of the page to display our map, in this case called map_canvas. I have already defined a style to limit the width and height to 512 pixels. We also want the map to draw when the page loads, so insert the following onload event into your body tag:

onload="initialize()"

We also need to insert our map_canvas div so the previously defined JavaScript has somewhere to actually draw the map. So create a div with the id map_canvas.
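Put together, the body markup might look something like this (a minimal sketch, with an inline style standing in for the stylesheet rule mentioned above):

<body onload="initialize()">
  <div id="map_canvas" style="width: 512px; height: 512px"></div>
</body>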

And that is about it. You should now be able to see a map with a red marker in the centre marking the address we specified, its location determined by Google's geocoding and the Google Maps API, and sorted out with the help of PHP. This could easily be improved by making the marker clickable and storing data on it, adding multiple markers or animations, or customising the map view itself.

To view a demo of the completed script, click here.

Update: You can also download the complete sample by clicking here.

Thoroughly cleaning up a WSUS server

I was recently tasked with performing a clean-up of some of our servers, removing old files and software installations, as well as a clean-up of our WSUS server. After a quick look I could see that our previous administrator had set it to download driver updates as well, which was taking up quite a large amount of space and wasn't something we were really looking at using (no driver updates were approved).

So I ran the WSUS clean-up wizard, which removed some old computers, but the driver updates remained, and I wanted them gone. So I quickly made sure the driver updates classification wasn't selected and ran the following PowerShell script on the WSUS machine.

# Load the WSUS administration assembly and connect to the local server
[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | Out-Null
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer();
# Build the clean-up scope, choosing what to purge
$cleanupScope = New-Object Microsoft.UpdateServices.Administration.CleanupScope;
$cleanupScope.DeclineSupersededUpdates = $true
$cleanupScope.DeclineExpiredUpdates = $true
$cleanupScope.CleanupObsoleteUpdates = $true
$cleanupScope.CompressUpdates = $true
#$cleanupScope.CleanupObsoleteComputers = $true
$cleanupScope.CleanupUnneededContentFiles = $true
# Run the clean-up
$cleanupManager = $wsus.GetCleanupManager();
$cleanupManager.PerformCleanup($cleanupScope);

This script took about an hour to run but worked like a charm. It cleaned up the whole WSUS database, removing old computers and obsolete and unneeded updates (including the drivers I no longer wanted), as well as removing the associated update files, which freed up a lot of space. The script above is modified so as not to remove computers from the database, but that line can simply be uncommented if you want it included.

Let me know in the comments if you found this useful.

Fixing HTTP Error 500.21 – Internal Server Error Handler “WebServiceHandlerFactory-Integrated” has a bad module “ManagedPipelineHandler” in its module list on IIS.

I was recently developing some .NET web applications at work and finally took the plunge of setting up a server to host them, once I was happy that we had reached the point where they were usable applications. So after setting up Server 2008 R2, I went ahead and enabled IIS with the ASP.NET configuration, along with a host of other modules to support what we were after, gave the server a reboot, and everything was up and ready to go.

I moved over to my development machine and instructed Visual Studio to publish an application to this newly set-up web server. Everything up until this point had worked perfectly, and the web app published without any errors. So I opened up Internet Explorer and, lo and behold, an error: page cannot be displayed. Thinking maybe it was a permissions thing, I remoted into the web server, viewed the site on localhost and got Error 500.21 – Internal Server Error, Handler "WebServiceHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler". I had gone ahead and configured an application pool for the web apps we were going to use, and everything seemed OK. I then remembered something: in the good old IIS 6 days you had to run a .NET registration program to notify IIS you had it installed and register the handlers for .NET. But, thinking to myself, this is now 2008 R2, come on, it couldn't be that.

I opened up an elevated command prompt, navigated to C:\Windows\Microsoft.NET\Framework\v4.0.30319 and ran aspnet_regiis.exe -i, causing ASP.NET to register itself with IIS and enable the handlers required to run .NET pages.
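In full, the commands were (on a 64-bit server the Framework64 directory may be the one you need):

cd C:\Windows\Microsoft.NET\Framework\v4.0.30319
aspnet_regiis.exe -i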

After running this command I restarted IIS and, bang, the web application started working. An easy fix, but a pity it couldn't have run automatically when I explicitly install ASP.NET under the IIS configuration.


Assigning resources via logon script based on computer names.

We've recently been having an issue where printers deployed via Group Policy haven't been deploying, or are deploying but not being set as the default. After some investigation, the easiest fix was to write a VBScript to ease the deployment of printers throughout our environment. Luckily for us we have naming conventions, and machines are usually named something like 2011uname or 2014uname, based on a student's final year.

You can use this script for all sorts of things, from allocating printers to mapping network drives. To increase or decrease the number of characters the script looks at, simply modify the strLength (Define String Check Length) variable at the beginning of the script.

'Actions based on Computer name for logon scripts
'Define String Check Length
strLength = 4

'Get the computer name
Set WSHNetwork = CreateObject("WScript.Network")
strComputer = WSHNetwork.ComputerName

'Select year level by ending year
Select Case Left(strComputer,strLength)
    Case "2011"
        msgbox "year 12"
    Case "2012"
        msgbox "year 11"
    Case "2013"
        msgbox "year 10"
End Select

The script will display a message box, but within each case you can specify whatever action you want. Hope that helps someone out.
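For example, instead of a message box you could map a year-level printer and make it the default inside a case. A quick sketch (the print server and queue name are placeholders):

'Map the year-level printer and set it as the default
Set WSHNetwork = CreateObject("WScript.Network")
WSHNetwork.AddWindowsPrinterConnection "\\printserver\Year12-Printer"
WSHNetwork.SetDefaultPrinter "\\printserver\Year12-Printer"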

Server Side E-Mail Validation with PHP

Validation is one of the most important things you can do on a public-facing website. It prevents users from entering required information incorrectly or, even worse, from attempting to damage your site via some form of script or SQL injection attack.

Recently I was developing a website for a real estate firm; as a legal requirement, they need to have valid e-mail addresses to link up to views of listings. Client-side e-mail validation is fairly simple, but what if the user is malicious and removes the JavaScript, or simply has JavaScript disabled? So I added both client and server side e-mail validation.

One important thing to remember is to always sanitise your inputs whenever something is going into a database, but that's for another article. There are several ways to tackle e-mail validation; I let the client side handle the format and let the server do some heavier work. Firstly, we create the function. I've called it checkEmail, and we pass it an e-mail address via $email.

function checkEmail($email)
{

We then move on to setting things up: creating our error variable, then taking our e-mail argument and ensuring it is safe.

    $email_error = false;
    $Email = htmlspecialchars(stripslashes(strip_tags(trim($email))));

After getting the address from our function call, we perform a simple test to see if there is anything there and, if so, begin validating. If not, we flag the error so the user can be told.

	if ($Email == '') { $email_error = true; }
	elseif (!preg_match('/^[a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z]{2,}$/', $Email)) { $email_error = true; }

There is a piece of nifty regex which simply validates that we have an e-mail address in the specific format of name@domain.tld, with only alphanumeric characters and - . _ being accepted. If our regex passes, we then begin to do some checking, which is where the server side validation comes in. We explode our e-mail address to extract the domain and, using PHP's inbuilt checkdnsrr function, perform an MX lookup on the supplied domain.

	else {
	list($Email, $domain) = explode('@', $Email, 2);
		if (! checkdnsrr($domain, 'MX')) { $email_error = true; }
		else {
		$array = array($Email, $domain);
		$Email = implode('@', $array);
		}
	}
	if ($email_error) { return false; } else { return true; }
}

If the checks all pass, the function returns TRUE and code embedded on the calling page allows the form to submit. If we do get an error, it returns FALSE, and code on the calling page notifies the user.
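On the calling page, using the function might look something like this (a sketch, assuming the address arrives in a POST field named email):

// Validate the submitted address before doing anything else with the form
if (checkEmail($_POST['email'])) {
    // Address is well-formed and its domain has an MX record, so continue
} else {
    // Flag the field and ask the user to re-enter their e-mail address
}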

And there you have it: server side e-mail validation. Of course, you could improve on what I've done by actually checking for an alias on the particular domain, or by sending a validation e-mail to the address before the user can continue, but that is all beyond the scope of this function and article. If you do end up using this function, I'd love to hear where you are using it, so feel free to let me know.