Law Thirty-Six

You gotta go with what works

Woman in a Christmas hat surrounded by question marks

December 19, 2013
by Jon

Well prepared for next Christmas

There are two boxes of Christmas cards on my desk. There have been two boxes of cards there for a week or so, to be honest, part of the clutter that reduces the working area on my nice large desk to something not much wider than my keyboard. The cards have been nagging me for the past ten days, niggling at me every time I think I can stop and relax for a moment.

For those of you thinking but I’ve already had a Christmas card from you, I’m not talking about my friends and family cards: I’m not allowed to do those, or at least that’s the story I tell people. The truth is that Becky takes care of them, just needing to be fed the occasional mug of mulled wine in encouragement. She comes home in early December with several packs of different card designs, then works through her randomly sorted mental list of recipients. She consults the oracle of addresses for those members of my family that seem to move house on an annual basis, and makes sure that recent additions to growing families are properly accounted for.

She even gets all this done in time to send the cards out using second class stamps.

I don’t think I’ve ever managed to send Christmas cards second class.

It’s a crazy, haphazard scheme. Much pondering is involved as she chooses just the right card from the hierarchy of choices for each recipient. Typically, she is a couple of cards short and there is a quick dash to the shops for the last few she needs. One year, I suggested getting label sheets for the printer and creating a Word mail merge for all the addresses, but this resulted in a look of scorn I’d not seen since the time I floated the idea of opening presents before lunch on Christmas Day.

I’m left to fend for myself for cards to colleagues at work, and I take this responsibility seriously. I make a list, grouped by office. I work out exactly how many I’ll need, allowing an extra 15% for mistakes and for anyone who sends me a card when I hadn’t sent one to them. I buy cards of more or less the same design so that I can just pick the next one from the pile regardless of recipient. All carefully planned.

Except my cards are still in the boxes next to me, niggling me. And Becky’s are in the post, second class stamp on the outside, or even sat in other people’s houses on shelves, mantlepieces or in those weird, tree-shaped card holders.

Every year it’s the same. Some time in early November, I decide that this is the year I’m going to be organised. I’m going to get my cards before December, write them all in the first few days of the month, and then stun all around when mine is one of the first cards they receive. Except it doesn’t work out that way. Ever.

This year, it was beginning to get me down until I changed my perspective on the matter this morning. It’s December 19th, and even Champion the Wonder Postman isn’t going to get these cards to our offices across the UK until Monday. Most of the people I want to send cards to will have broken up for Christmas by then. A card that arrives after a person has finished for Christmas is just as bad as one that arrives between Christmas and New Year, left to sit unopened until January, never likely to see a shelf, mantlepiece or weird, tree-shaped card holder. It may see daylight for just a few fleeting moments before ending up in the recycling bin.

So instead of seeing myself as being late with this year’s cards, I choose instead to view myself as being well prepared for next Christmas. I may even write the cards tonight, just to get even further ahead of the competition.

And if you work with me and you’re wondering why you’ve not had a card from me this year, well I guess you now know.

Two boxes of Christmas cards

Next year’s Christmas cards

Homer Simpson

December 18, 2013
by Jon

A world full of Homer Simpsons

As Christmas and the end of 2013 approaches, the media is starting to fill up with lists of things that happened in 2013 – news events, celebrity deaths, top grossing films and the like.

Google has produced a list of the top search terms of 2013.

The top three Google search terms are Facebook, YouTube, then Google.


Number three on the list is distinctly worrying – these people went to Google and searched for ‘google’. And every time I think about that fact, I think about this:

SDP logo

December 13, 2013
by Jon

Reporting on software installations in ServiceDesk Plus

If you have workstation asset scanning in place within ServiceDesk Plus, then you have a wealth of information available to you about what software has been installed where across your organisation. We have the AssetExplorer agent installed on all our PCs and laptops, set to scan on startup. In a global organisation with network links of varying capacity, we found this the best way of making sure that we got regular information back from workstations. (A network or domain scan would have to account for time zone differences, and I was concerned about the server reaching out across the global WAN for machines that weren’t actually there).

There are options within SDP to discard old scan history, but we have left those switched off, meaning that we can show a full history of the machine from when it was first seen on our network.

Today, a series of example queries around installed software. We’ll use Internet Explorer 9 as our example, but clearly you could use any software of your choosing. Do make sure you spell the software name exactly as it appears in SDP (you can check this on the Assets tab by choosing Scanned Software from the sidebar on the left).

These queries are designed to be run against Microsoft SQL Server in SQL Management Studio. They use the date functions I described in an earlier post – you’ll need to add those to your installation before trying these. However, as these queries all execute as a single statement, they could easily be modified to run as a Custom Query on the SDP Reports tab.
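
As an aside, if you’re not sure exactly how the software name is recorded in the database, a quick lookup against the softwarelist table will list the candidates (just a sketch, so adjust the LIKE pattern to taste):

SELECT sol.softwarename
FROM softwarelist AS sol
WHERE sol.softwarename LIKE '%Explorer%'
ORDER BY sol.softwarename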

First of all, a simple one: who has this software installed?

SELECT si.workstationname AS [Workstation],
       si.loggeduser AS [Logged on user],
       dbo.my_BIGINT_TO_DATE(ah.audittime) AS [Last Scanned],
       u.first_name AS [Owner],
       sdu.jobtitle AS [Job title],
       dd.deptname AS [Department],
       sdo.name AS [Site] -- assuming the site name comes from sdorganization.name
FROM systeminfo AS si
LEFT OUTER JOIN resourceowner AS ro ON si.workstationid = ro.resourceid
LEFT OUTER JOIN aaauser AS u ON ro.userid = u.user_id
LEFT OUTER JOIN sduser AS sdu ON u.user_id = sdu.userid
LEFT OUTER JOIN userdepartment AS ud ON sdu.userid = ud.userid
LEFT OUTER JOIN departmentdefinition AS dd ON ud.deptid = dd.deptid
LEFT OUTER JOIN sdorganization AS sdo ON sdo.org_id = dd.siteid
LEFT OUTER JOIN audithistory AS ah ON ah.workstationid = si.workstationid
WHERE ah.auditid = (SELECT MAX(ah2.auditid)
                    FROM audithistory AS ah2
                    WHERE ah2.workstationid = si.workstationid)
AND si.workstationid IN (SELECT soi.workstationid
                         FROM softwareinfo AS soi
                         INNER JOIN softwarelist AS sol ON soi.softwareid = sol.softwareid
                         WHERE sol.softwarename = 'Windows Internet Explorer 9')
ORDER BY ah.audittime DESC

I tend to include the scan time in here to indicate how up-to-date the information is. Also, in this example, I’ve assumed that you’re assigning workstations to Requesters, so I show that information too.

You can modify the query to show machines that do NOT have the software installed by changing the ‘IN’ in the final WHERE condition to ‘NOT IN’.
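
With that single change, the tail end of the query reads:

AND si.workstationid NOT IN (SELECT soi.workstationid
                             FROM softwareinfo AS soi
                             INNER JOIN softwarelist AS sol ON soi.softwareid = sol.softwareid
                             WHERE sol.softwarename = 'Windows Internet Explorer 9')
ORDER BY ah.audittime DESC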

If you’re trying to track the spread of a piece of software across your organisation, this script will show you the individual installs and uninstalls. This is great for tracking the progress of an automated software deployment. Given that we scan workstations on start up, this works especially well for us since many pieces of software ask for a reboot after installing.

SELECT dbo.my_BIGINT_TO_DATE(ah.audittime) AS [Operation date],
       si.loggeduser AS [Logged on user],
       u.first_name AS [Name],
       sdu.jobtitle AS [Job title],
       dd.deptname AS [Department],
       sdo.name AS [Site], -- site name assumed to come from sdorganization.name
       si.workstationname AS [Workstation],
       CASE ao.operationstring WHEN 'Delete' THEN 'Uninstalled' ELSE 'Installed' END AS [Operation]
FROM swaudithistory AS swah
INNER JOIN softwarelist AS sl ON swah.softwareid = sl.softwareid
INNER JOIN audithistory AS ah ON swah.auditid = ah.auditid
INNER JOIN systeminfo AS si ON ah.workstationid = si.workstationid
INNER JOIN auditoperation AS ao ON swah.operation = ao.operation
LEFT OUTER JOIN resourceowner AS ro ON si.workstationid = ro.resourceid
LEFT OUTER JOIN sduser AS sdu ON ro.userid = sdu.userid
LEFT OUTER JOIN aaauser AS u ON ro.userid = u.user_id
LEFT OUTER JOIN aaalogin AS l ON l.user_id = ro.userid
LEFT OUTER JOIN departmentdefinition AS dd ON ro.deptid = dd.deptid
LEFT OUTER JOIN sdorganization AS sdo ON sdo.org_id = dd.siteid
WHERE sl.softwarename = 'Windows Internet Explorer 9'
ORDER BY ah.audittime ASC

In this case, I’ve shown the user logged on to the PC, their department and site rather than the assigned owner within SDP. This was useful for my needs because it indicates who was responsible for installing/uninstalling the software. Well, sort of. Since there’s a lag between when the software is installed and when the machine is scanned, it may not be the right user, but it’s the best guess we’ve got.

Finally, a query that shows the different builds/versions of the software deployed. Just a summary in this case, although the detail is available.

SELECT soi.fileversion AS [Version],
       COUNT(*) AS [Installs]
FROM softwarelist AS sl
INNER JOIN softwareinfo AS soi ON soi.softwareid = sl.softwareid
WHERE sl.softwarename = 'Windows Internet Explorer 9'
GROUP BY soi.fileversion
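
And if you do want the detail behind that summary, a sketch along these lines (using the same tables as the queries above) lists each workstation against the version it reports:

SELECT si.workstationname AS [Workstation],
       soi.fileversion AS [Version]
FROM softwareinfo AS soi
INNER JOIN softwarelist AS sl ON soi.softwareid = sl.softwareid
INNER JOIN systeminfo AS si ON soi.workstationid = si.workstationid
WHERE sl.softwarename = 'Windows Internet Explorer 9'
ORDER BY soi.fileversion, si.workstationname
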
Man with head in the sand

December 8, 2013
by Jon

The missing step in Problem Management

In our organisation, we loosely follow the ITIL principles for IT service management. The Problem Management process is all about trying to get to the bottom of those niggles that keep happening – the ones where turning it off and on again will work, but you know it’s going to happen again tomorrow. And the next day. And the day after.

Like many of the ITIL processes, Problem Management can look a little different in reality from the way it does in the books, and so I’ve annotated the top half of the process diagram to show what really happens:

ITIL Problem Management process diagram with Prioritization step replaced with Denial

Problem Management with a dose of reality

I think it’s all too easy for us to turn the Prioritization stage into a Denial stage and to avoid getting to grips with the Problem for as long as possible. Over time, we develop finely-tuned techniques for this.

Here are five arguments people employ to deny that a Problem exists.

It’s an isolated case

The easiest thing to do is to prioritise it away based on prevalence. If only one person is reporting it, it’s probably something specific to their PC. Issue them with a new computer or re-image their existing computer, and it will probably go away. That’s quite a drastic step, and issuing the user with a new computer can create a whole new set of problems for them. Which makes it all the more frustrating when the problem shows up on this new computer as well. Every Problem starts with a single report, and by denying problems because they’re not happening to many people (yet), we also prevent ourselves from nipping them in the bud.


It’s a user issue

Another avoidance approach is to blame the user: Problem Between Keyboard And Chair. They’re using it wrong – if they didn’t double-click where they should be single-clicking, the application wouldn’t crash and they wouldn’t lose all their work. By classifying the Problem as a user issue, the IT professional has at a stroke absolved themselves of all responsibility. The software/hardware is fine, it’s just that the user keeps doing unexpected things to it. Even if this really is the cause, it doesn’t negate the existence of a Problem. It may mean that the answer is not a technical fix, it’s communication or training or documentation, but it’s still a Problem that needs to be addressed.

Labeling a Problem as a user issue can be a tough sell with the end user population. Generally, it’s considered bad form to tell users that they’re stupid and that if only they were using it right, they wouldn’t be having the problem. Our company found itself in a delicate situation around this when investigating performance problems with our document management system (“DMS”) of the time. Poorly-constructed full text searches were bringing the system to a grinding halt, not just for searching but across all DMS operations. Users were searching for documents that contained words like “contract” or “letter” or “agreements”, and the architecture of the DMS was such that once you asked it to do that search, it would just stop and populate a temporary table with the million or so results that matched the user’s query, before checking them one by one to see whether the user had permission to view the document. Even if the user crashed out of their DMS client, the server would continue quite merrily until someone spotted the long-running transaction on the SQL Server and killed it.

From one perspective, the Problem was the way in which the user was querying the system – and we considered for a while naming and shaming people who made these searches. But of course the real Problem lay in the DMS architecture, which processed search requests on the same server as every other DMS activity, and in the way access rights were tied to documents.

It’s perception

I hate performance problems. Obviously, I hate suffering from them, but I also hate it when we need to troubleshoot them. Typically, there’s no benchmark data to fall back on. Internet browsing is slow, the user says. But what constitutes fast enough? “Look,” says the user. “It’s taking too long when I click this link for the page to load.” Well, maybe…

Similarly, we had a problem when we rolled out a new standard desktop based on Windows 7. This was part of a hardware refresh as well, so users received a new PC or laptop to replace the doorstops we’d been using up until then. Around three months after the initial rollout, the complaints started coming in to say that the machines were taking too long to start up in the mornings. Were they? Or was it just that the honeymoon period post rollout was over? Memories of the old machines, where you could make a cup of coffee, drink it and wash the cup afterwards whilst waiting for the Ctrl + Alt + Del prompt to appear, had faded. This particular Problem became stuck in the Denial stage for several weeks before someone dug out the measurements that had been taken during the design stage, showing that start up times had indeed grown since the rollout.

Can’t reproduce

We’ve been dealing with an issue with Internet browsing performance at work that’s been stuck at the Denial stage for quite a while. A call would come in from a user saying that browsing was slow, so we would swoop in to try to quantify it. We’d install HTTP Watch or Wireshark and start capturing browsing activity, and run Internet speed tests from one of the benchmarking websites. And of course, these tests would not show an issue. We’d analyse the results, which would show websites loading almost instantaneously, and everyone would agree that there was no issue. Except the users. Because this was one of those most hated of Problems – the intermittent issue. Either the precise circumstances causing the problem are not understood, or they’re based on a complex coincidence of events, or for some other reason they appear to be completely random. By the time we come to investigate, there’s nothing to see. Whoever is investigating types “Can’t reproduce” at the bottom of the Problem record, and closes it.

Old problem. Won’t fix

I actually read this in an internal Microsoft bug report. You could follow the history of it back through two releases of the product, with some poor Microsofty trying to get someone to take a particular Problem seriously enough that they might try to fix it. The response was that this particular feature had been broken for so long now, it wasn’t worth fixing. The other way of looking at it, from the customer’s perspective, was that it had been broken for at least two releases of the software now and it was about time Microsoft fixed it! There can be a complacency within an organisation that a particular Problem, though troublesome, is never going to get resolved. Perhaps someone looked at it once and couldn’t see a way round. But fresh eyes, different surrounding circumstances, and new technologies can sometimes mean that there’s a solution there after all, if you can be bothered to look for it.

SDP logo

December 1, 2013
by Jon

Running multiple instances of ServiceDesk Plus on a server

In our organisation, we now have four instances of ServiceDesk Plus in use in different departments of the company. The one used by the IT department has the heaviest usage, and we run it on a dedicated VM. The other three have lighter usage, and so we have them set up on a single Windows Server 2008 VM. Running each instance on its own server would be expensive in terms of hardware and server licences. Of course, SDP can be run on Linux to reduce licence costs, but in our organisation that was not an option.

Ideally, you would be installing these multiple instances of SDP at the same time as each other. If you decide to add a second instance to an existing box at a later date, this is still possible and you can pick up the instructions from “Tweaking the first installation”. It goes without saying that you need to size your server for these multiple instances, including making sure that you have sufficient disk space on the various partitions.

The instructions assume Windows Server and MS SQL Server but should outline an approach you can use for other operating systems and databases.

The first installation

  1. Let’s do a little bit of planning first of all:
    1. Decide what ports each instance will run on. If port 80 is available, go ahead and use that for one instance. 8080 is the default offered when you install SDP, but I’ve tended to use 81, 82, 83 etc – just stay away from any well known ports that are in use on your server.
    2. Create a SQL login on the database server. It will need to have the appropriate rights to be able to create a database at least while you’re doing the installations, although you can scale that back later (a sketch of this is shown after these steps). I’m assuming the most difficult configuration, where you’ll be hosting multiple SDP databases on a single SQL Server. Unless you have security reasons not to, there’s no issue with sharing a single SQL login for the multiple instances.
  2. On the Windows server, run the installer, accepting the licence agreement and choosing your SDP edition. When you get asked for a folder to install the application into, give it a name that’s unique to this instance. So, rather than D:\ManageEngine\ServiceDesk, perhaps D:\ManageEngine\ServiceDesk_Alpha.
  3. Set the web server port to whatever you chose for this instance (let’s assume that the Alpha team got Port 80).
  4. Specify your SQL Server connection info, and notice that you cannot change the database name. Don’t worry about that for now.
  5. At the end of the installation process, allow SDP to start and then browse to the web application to make sure that it has started properly.
  6. If you have any patches to install, then go through that process now, testing at the end to make sure that you have a working instance of SDP.
  7. Stop the service from Administrative Tools | Services
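
As an aside, creating that shared SQL login from step 1 might look something like this. The login name and password are purely examples, and you can tighten the server role membership once the installations are complete:

-- Example only: a shared login for the SDP instances, with rights to create databases
CREATE LOGIN sdp_service WITH PASSWORD = 'UseAStrongPasswordHere';
EXEC sp_addsrvrolemember 'sdp_service', 'dbcreator';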

Tweaking the first installation

What we’re going to do here is to rename the SQL database to something more specific to this instance and then modify the way the ManageEngine service appears in Administrative Tools | Services.

  1. On the SQL Server, rename the SQL database to something that indicates the instance it’s running, for example servicedesk_alpha. You can do this from SQL Server Management Studio by right-clicking the database and choosing Rename or by executing the command: ALTER DATABASE servicedesk MODIFY NAME=servicedesk_alpha
  2. On the SDP server, open a command prompt and navigate to D:\ManageEngine\ServiceDesk_Alpha\bin (assuming you used the installation folder above).
  3. Run changedbserver.bat and change the name of the database to your new name, for example servicedesk_alpha. Click Test then click Save. A message will appear to warn that the database already exists, then another saying that the change was successful.
  4. From Administrative Tools | Services, start SDP again and verify that everything is up and running.
  5. Run regedit.exe. Usual warnings apply about making changes directly to the registry…
  6. Navigate to HKLM\System\CurrentControlSet\Services\ServiceDesk. Set the following:
    1. DisplayName to something like “ManageEngine ServiceDesk Plus (Alpha)”
    2. Description, if you wish, to something that identifies the instance of SDP this service applies to.
  7. The changes to the way the service is displayed won’t take effect until you restart Windows – do that now if you wish, but as it’s a cosmetic change you can leave it until later.

The second installation

You cannot simply run the installer to create your second installation as it will try to uninstall the first instance. Instead, we’re going to use some Copy and Paste.

If your first installation is brand new and if your second installation will be the same edition as the first, you can just take a copy of the D:\ManageEngine\ServiceDesk_Alpha folder, then paste and rename it to create D:\ManageEngine\ServiceDesk_Beta (or whatever you want to call your second instance).
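
As a sketch, using the folder names from this post, that copy can be made from a command prompt with robocopy:

robocopy D:\ManageEngine\ServiceDesk_Alpha D:\ManageEngine\ServiceDesk_Beta /E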

If your first installation is not brand new, you are likely to have files within the application folder (such as file attachments and inline images) that would need to be cleared down. Rather than do that, the approach when you don’t have a clean source to copy from is to install SDP on a spare machine and copy from there. That could be a Windows workstation using a SQL Express database, if that’s what you have available. If you want to use the 64-bit version of SDP, then you will need another 64-bit machine for this temporary installation. It will help, but is not essential, if you specify the web port you’ve decided to use during the installation process. When you’re done, copy the \ServiceDesk folder to your destination server and rename it to create D:\ManageEngine\ServiceDesk_Beta (or whatever).

Don’t start your second instance yet!


  1. Open a command prompt and navigate to D:\ManageEngine\ServiceDesk_Beta\bin — make sure you’re in the second instance’s folder. Run the command changewebserverport 81 http, where 81 is the port number you’ve chosen for this new instance.
  2. If you created your second installation by running the installer, and if you created a database for it on the live SQL Server, then follow the instructions from earlier and rename this database as something specific to this instance, such as servicedesk_beta.
  3. Back in D:\ManageEngine\ServiceDesk_Beta\bin, run changedbserver.bat. Set the database name to servicedesk_beta and make sure the SQL login and password are correct. Click Test then click Save. You may get warned that there is already a database with this name, depending on the route you took to get here.
  4. In the same folder, run the command run.bat. This will launch SDP and initialise the database. Allow time for the server to initialise then check that you can browse to the web application on http://servername:81.
  5. The next step is to set up this second instance of SDP as a Windows service. For Windows Server 2003, you’ll need the Windows Resource Kit tools installed. You will also need a different user account for each instance of SDP you’re running. A local user account should be fine – create one now from Administrative Tools | Computer Management.
  6. From a command prompt, running as an administrator, run instsrv ServiceDesk2 “D:\ManageEngine\ServiceDesk_Beta\bin\wrapper.exe”. You should be told that the service has been installed and asked to modify its settings. Don’t try starting it yet! There are a few changes to make before it will work.
  7. Open regedit.exe and navigate to HKLM\System\CurrentControlSet\Services\ServiceDesk2. Set the following:
    1. Append the following to the ImagePath value (the full resulting value is shown after this list): -s D:\ManageEngine\ServiceDesk_Beta\server\default\conf\wrapper.conf
    2. Set DisplayName to “ManageEngine ServiceDesk Plus (Beta)”
    3. If you wish, set the Description to something that indicates which instance this is.
  8. In Administrative Tools | Services, locate the new service. It will probably still be called ServiceDesk2. Open the Properties of the service and on the Log On tab, specify the account you created earlier. Click OK.
  9. Start the service and make sure that SDP comes up as expected.
  10. Just to check everything, restart the Windows Server but do not log on as a user. From a remote machine, make sure that you can browse to the two SDP instances and log on to each.
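
For reference, putting steps 6 and 7 together, the ImagePath value for the second instance should end up reading along these lines (with the paths adjusted to match your own install folder):

D:\ManageEngine\ServiceDesk_Beta\bin\wrapper.exe -s D:\ManageEngine\ServiceDesk_Beta\server\default\conf\wrapper.conf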

One thing to note: you cannot be logged on to both instances from a single browser at the same time, due to the way that the login process uses cookies.

Roof of the TARDIS set

November 30, 2013
by Jon

Studio 4 on Porth Teigr Way

There is no point trying to play it cool when you’re stood on the set of the TARDIS. If you’ve got yourself there, then you have to give up on any pretence that you’re just a casual fan of Doctor Who. You’re just a 21 foot scarf and a floppy hat away from whatever label you want to apply to Doctor Who geeks. Well, actually, a bow tie and a fez away these days, and I have instantly shown my age by referring back to a time of jelly babies and toothy grins.

Last Saturday was 23rd November 2013, and you would really have been doing well to miss the fact that it was fifty years to the day since the first episode of Doctor Who was transmitted. We had already booked a trip to Cardiff to go to the Doctor Who exhibition as a birthday treat for Number One Son and his friend when an email popped up offering tours of the TARDIS set as well. Deciding to add this onto the weekend’s itinerary didn’t take long.

When I was young and living in Cardiff, Doctor Who was made in London and most of the filming was done in the studios at Television Centre. If they were filming on location, they might stray as far as the quarry of the week somewhere within driving distance of London. It was expensive to take the production any further than that.

Outside BBC TV Centre, London

BBC TV Centre in London, now closed

As far as I’m aware (and others will no doubt correct me), it only came to Wales three times during what we’re supposed to now call “Classic Who”: in the early 70s, for the Jon Pertwee story The Green Death, where a company pumping industrial waste into a mine leads to an infestation of giant, killer maggots; in the early 80s, where North Wales became the wastelands of Gallifrey for the 20th anniversary show The Five Doctors; and in the 90s, when Sylvester McCoy’s Doctor found himself fighting aliens that had decided to invade a 1950s holiday camp in a story called Flight of the Chimerons at the time, and later renamed Delta and the Bannermen.

Scene from Delta and the Bannermen

Scene from Delta and the Bannermen

For the last of these, I managed to find out where filming was taking place on the weekend. I was still in school, so couldn’t go during the week, and I was still too young to drive (just), so my parents drove my friend and me to a farmhouse somewhere outside Cardiff — private property, so we had to wait at the gate until producer John Nathan Turner took pity on us and let us stand somewhere closer to the action. A little too close to the action, as it happens: he stood us behind a bush that turned out to be right in shot and me and my bright red coat got shouted at by the director mid-take.

It was well after I left Cardiff and moved to England that Doctor Who was revived and production moved to Cardiff. Finding out about location filming is now somewhat easier than it used to be. One sighting of a police box in a field and there’ll be photos on Twitter within 10 minutes. There’s a Facebook page devoted to posting the latest news about filming locations, and people have now identified the signposts used by the crew to guide production vehicles to where the filming is. Where filming takes place in a public area like Trafalgar Square or the streets of Cardiff, they bring barriers to keep the crowds back.

Studio filming started off at the curiously named Upper Boat, but last year Doctor Who and Casualty both moved to the BBC’s new studios at Roath Lock in Cardiff Bay. The Torchwood hub is a ten minute walk away. More or less next door to the studio is the new, purpose-built home of the Doctor Who Experience.

Exterior view of BBC Roath Lock studios

BBC Cymru Wales studios at Roath Lock

So, last Saturday, a group of twenty of us met at the Experience and were each handed a somewhat pointless visitor’s badge on a lanyard (or in my case, just a lanyard with no badge!) and walked across to the studio. It’s a curious place. At one end, behind a ridiculously high wall, you can just make out the tops of a street set – building facades with green stained wood backs. The offices have strangely shaped windows. But you walk straight past this, past the entrance with the obligatory Dalek standing guard, to a turnstile at the far end of the building. One by one, you’re swiped through into the studio compound itself and then taken through another security gate to a walkway between the studios on your left and the offices and other functional rooms on the right – canteen, makeup, costumes, etc. They like to keep this area mysterious, so you’re not allowed to take photos until you get into the studio itself, but if you watched The Five(ish) Doctors Reboot, you’ll have seen this walkway. Wizards v Aliens, Russell T Davies’s current show, was filming in one of the studios. Keen not to be shouted at by another director, I kept quiet as we walked past.

Strax at Roath Lock Studios

Scene from The Five(ish) Doctors Reboot showing Roath Lock studios

The TARDIS set is a permanent set standing inside Studio 4, a large warehouse-type area with black walls and air conditioning. The thing is huge when seen from the outside, standing nine metres tall. It’s built around a steel frame, but from the outside all you see are large plywood panels – helpfully numbered in case someone ever tries to take it apart and re-assemble it. The rest of the studio was curtained off with a security guard posted just at the edge of the curtain, making sure that we didn’t wander out of the designated area. Against the back wall, you could just make out the top of the scaffold they use for wire work, but the rest of the area was probably more or less empty, waiting for filming to start on Series 8 in January.

The outside of the TARDIS set

The TARDIS set is clad mostly in plywood

There was a slightly odd assortment of chairs set in a circle where we sat and waited as we were divided into groups of eight and took turns to walk up the wooden steps to the police box entrance to the TARDIS. The doorway into the TARDIS set is a police box, and the edges of the doorway and all of the floor are painted green with little crosses for green screen and motion tracking, so that they can film someone walking (The Snowmen) or even riding a motorcycle (The Day of the Doctor) from the outside world right into the TARDIS.

Three children outside the TARDIS doors

Numbers 2 and 3 son plus friend outside the TARDIS

We took photos of the children outside the doors of the TARDIS and Becky asked if I wanted mine taken too. And that’s the point where I just had to admit that there was no point being cool about it: yes, I did want a photo in front of the TARDIS doors.

We walked through into the inside of the set itself. Most sets of its kind have a wall missing where the cameras and crew would be, but this one is complete. If you don’t stare too hard at the two exits with their wooden ramps leading back down to floor level, you’re fully immersed inside the TARDIS. Look up and you can see that the roof of the structure is material, letting light in and reducing some of the echo you might otherwise get. The material can also be pulled back for camera access.

Roof of the TARDIS set

The TARDIS set still has round things that no one understands…

Blue tape cordoning off half the console area and the watchful DWE staff member are the reminders that this is a live TV set – please do not touch, and for goodness’ sake, don’t break anything. And unlike the sets of the past, it looks and feels solid and stands up well to closer viewing. Later in the afternoon, we saw the console used from The Five Doctors onwards in the Exhibition. In the old days, the TARDIS set was assembled as required for each filming block and then broken down and stored – and you could tell. It was obvious that the panels could be lifted out, and you wouldn’t want to lean too heavily on the console. Here, the floor was firm underfoot and the walls, should I have been allowed to lean against them, looked like they’d hold. That’s what the extra money buys. The last few seasons of Classic Who reportedly cost around £100,000 per episode. Eccleston’s first season apparently cost closer to £1,000,000 per episode and the spend has probably increased since the move to HD.

Becky stood in front of the TARDIS console

Becky stood in front of the TARDIS console

There was some chat about what we were seeing, how long it had been here, how many watts of power it used up in full flight mode, how many miles of wiring went into it, but we were also given ample time to take photos and just gawp around. The very sensible warning as we were about to enter was to watch out for the step down just inside the doors — apparently, people tend to be too busy looking up and around to watch where their feet are.

View from the lower level of the TARDIS set

The TARDIS set stands nine metres tall

But there were another three groups to be ushered through after us, so we slowly worked our way down the steps, looked at the underside of the console (where the Doctor finds his new outfit in The Bells of Saint John) and then out of one of the exits.

Outside, another staff member from DWE was keeping the other groups entertained as they waited their turn. Full credit to all the DWE staff we interacted with that day – they all knew their stuff. I’m not saying that the Experience is staffed by a collection of anoraks by any means, but they certainly weren’t going to get Bakers Tom and Colin mixed up and they knew their New Who very well. We were quizzed while we waited, and I won’t blow their cabaret act by revealing the question, but it was nicely pitched and I’ll admit that I struggled.

Fairly soon, it was time to retrace our steps, back through the turnstile and head back to the Doctor Who Exhibition itself.

So, was it the highlight of the weekend? For me, yes it was. It was an extremely Who-centric day, with the studio visit being followed by the Doctor Who Exhibition and a cinema trip to see Day of the Doctor, but this was something unique. I’m intrigued by TV and film production. I would have happily taken a tour of the Casualty set if it was offered. And this was also my first time inside a TV studio, not having joined my fellow Who fans on their trip to HTV Wales. But obviously, the fact that it was the Doctor Who set made it something special.

When we were checking out of the hotel the following day, the receptionist asked what we’d been up to in Cardiff.

“Are the kids Doctor Who fans then?” he asked.

“Well, sort of, yes.”

“…but you’re a bigger fan than they are, are you?”

Well that’s hardly a fair question, is it? I’ve had a lot more opportunity to be a fan than they have. I’ve watched every episode that’s been shown since 1977 more or less as it went out (okay, I watched Survival episode 4 a day late on VHS, and by the time New Who arrived, I had children and bath time clashed with Who time. But that’s what Series Record is for…) At 16, I was going to Doctor Who groups and conventions. Okay, full disclosure – I ran the local Doctor Who group and I helped organise a convention. I may have put my autograph book away, but it’s there somewhere. And I might not be able to recite the titles of all the Doctor Who stories in order any more, but there was certainly a time when I could.

Oh, who am I kidding? I’ll go order myself that 21 foot scarf.

Straw boater on a punt pole

November 21, 2013
by Jon

Five things I learned in my first job

Looking back over the last twenty years, it’s easy to see the influence of my first ‘proper job’ on my working life since. There was what might appear to some to be a sharp change in the direction of my career as soon as I left university. Two weeks after finishing a postgrad course, I was living in Oxford and working as an IT trainer. It was one of the best times of my life, probably the best job I ever had, and I’m clearly not the only person who worked there who thought so. The people I worked with there have gone on to achieve great things in their respective fields: IT directors, company directors, conference speakers, an Exchange MVP, and a priest among them. That may say something about how we recruited, but I think it speaks to what they gained from their time with the company. Sadly, the company itself ceased to exist a few years after I left, although some of the players of the time have since resurrected the name and it has risen again, providing Microsoft identity and access management solutions.

Oxford Computer Group’s job advert asked ‘Are you reasonably good at everything?’ Its interview process required you to work out what part of a train always goes backwards, and to discuss what determines whether an electric shock is lethal. HR professionals may sneer at the techniques we used, but they were effective: they selected staff who had the arrogance (for want of a better word) to stand up at the front of a classroom and purport to know more about a subject than any of the delegates and who were self-sufficient enough to deal with the demands of the role.

By the time I left the company five years later, I had learned a number of extremely valuable lessons about work.

Living on the edge is motivating

One of the company director’s catchphrases was “How hard can it be?”, and that summed up the company’s approach to challenges. New recruits were put on a steep learning curve. In my first week at OCG, fresh out of university and never having participated in, let alone managed, a project in my life, I found myself sat on a Microsoft Project 4 Introduction course. In week 2, over the course of five days, I restructured and re-wrote the two-day Project 3 Intermediate course for version 4. Instruction came from a series of 10 minute conversations with my manager a couple of times a day. On the Friday of week 2, I worked my first all-nighter, alone, while my boss attended an Oxford college ball. By 10am on Saturday, I had seven copies of the manual bound and packed in a bag together with the usual kit for an on-site training course, and my boss flew out to Dallas that day to teach it.

Get a group of OCG trainers together and it’s like listening to extreme sports enthusiasts, bragging about the time when they nearly fell off a cliff face or they cut it a bit fine opening their parachute. The classic story was of the two founding directors teaching a particular course: one would be in the classroom, one was in the room next door writing the next module. Every hour, they would swap round.

My own stories weren’t quite as extreme. My first course was on the client’s premises in Staffordshire and involved me driving up with eight computers and an OHP in the boot of the car, and powering the whole lot from the only floor socket available in the training room. I vividly remember the penny dropping about why you needed an intermediate table in a many-to-many database relationship whilst explaining it on a SuperBase intermediate course. There are also a shocking number of stories about staying up well into the early morning learning a course followed by getting up and driving for two hours to deliver the training. How I avoided ending up dead in a ditch, I don’t know.

If you’d asked me at 2am whether I was enjoying myself there and then, I doubt I would have said yes. But if you’d asked again at the end of the week when the course was over, the evaluations were in, and I was sat in the Rose & Crown with a pint in a glass with a handle, working my way through a half-pint of pistachios… oh yes, it had been a great week.

Mostly, we got away with it. We were set an impossible goal, and we would rise to the challenge through whatever means necessary. And the following week, we’d do it again, high on the adrenalin we got out of being just two pages (or less!) ahead of the delegates. The sales team could sell what they wanted without worrying too much about whether it was possible. Hence, we landed a contract to deliver Lotus Notes email training to a national law firm, despite the fact that (a) no-one in the company had ever seen Lotus Notes; (b) we didn’t own the software; (c) we only had two weeks to write the course; and (d) it took over a week for us to work out how to install a Notes server.

Occasionally, there were failures. Mostly, these involved the death trap that was Microsoft’s Supporting Windows for Workgroups 3.11 course. Many a good trainer was dashed against the rocky shore of Microsoft’s bizarre networking stacks. For the record, the first time I ever made one of my delegates cry was when I was teaching this course.

The only one who’s going to look stupid is you

When you stand up in front of a room full of people to teach a course, you either know your stuff or you don’t. Once you shut the door to the training room, there’s not a huge amount anyone else can do to help except mop up the tears afterwards. Delegates need to trust you, and each question you fail to answer and each time you’re caught out in a mistake damages that relationship with your class. As a fresh-faced youngster trying to teach project management software to battle-weary engineers, I would start each course confronted by a roomful of sceptical stares, and I couldn’t afford even the slightest lapse.

Therefore, you came prepared. You knew the material, you knew the exercises you were going to walk through and all the ways they could go wrong. You anticipated the questions that would come up. You played with the software until you broke it, then you worked out how to put it back together again. It was an extreme lesson in self-reliance, and it fostered skills in being able to assimilate knowledge quickly (even if you only retained it short term) and in thinking on the spot. I’ve always thought that the way in which OCG trainers prepared for courses was similar to the way in which barristers prepare for court appearances (assuming This Life is representative…): late night sessions spent mastering the brief, followed by six hours of winging it in front of a potentially unforgiving audience.

One of my colleagues used to say that a good course was one where none of the delegates stood up, pointed at him, and shouted “Charlatan!”

Of course, there were techniques you learned to get by. You would tend to get delegates who would start questions with “Isn’t it true that…” or “Doesn’t that mean that…” followed by five or six sentences of unintelligible geekspeak. One of my fellow trainers would typically respond with a non-committal shrug and the answer “Superficially, yes.” I was never sure whether he entirely understood the question he had been asked.

Training taught you that completing something to an ‘okayish’ standard just wasn’t good enough. This extended not just to preparing for training deliveries but to all aspects of work. Fifteen years before Lynne Truss and the Internet grammar police, one of our Directors had a reputation for copy editing and fact checking course material with a whiteboard marker. Your draft might come back to you with just a line through it and the comment ‘Rubbish’. Your grammar mistake could end up being emailed to the whole company with a rant about why it was wrong. This encouraged you to get it right before you handed it in.

Find your niche and you will go far

If I’ve given the impression that OCG was staffed entirely by fraudsters, then let me correct that. There were a large number of devastatingly clever people working at the company, and we traded on a reputation for technical excellence. We literally used to teach the Microsoft support engineers about Microsoft products. Two people wrote a highly-regarded course about NT4 recovery – how to understand what those blue screen errors are saying and what to do about it. Another colleague could speak at conferences on the depths of C++ or the intricacies of TCP/IP with equal skill. Another knew not just how many worksheet functions were available in Excel 5, but also what all of them did, including the obscure statistical ones. When not madly preparing for the following day’s training, we had the luxury of time to dig deep into our chosen products. Little was done internally to manage the skills profile of the trainers, but we naturally found niches for ourselves and pursued them.

This was an age before the Internet became the indispensable IT resource that it is today. We learned to read the product manual in the same way you might read a history book rather than the way you might read a maths textbook. We questioned it — surely that statement can’t be true, the product can’t work like that. No, it doesn’t. So how does it work? We would work back to basic principles and posit how the system ought to work, and then test to see whether it did. It’s a problem solving approach I still employ today with real-world IT problems.

There has to be give and take

My final two points here are observations about management lessons I learned at OCG.

Working for OCG was demanding. If you were training, that was generally not in Oxford. After travelling back to the city, you’d probably go back to the office to check email and deal with some admin tasks. Not all the trainers had laptops, and downloading email over a 56k modem could take so long that it was quicker to go back to the office than to try to work remotely. If you went into the office on a Saturday, you could be fairly sure that you wouldn’t be the only one, and the trainer’s prep room was often as busy at weekends as it was during the week.

Last year, we replaced everyone’s PC or laptop at work. As I walked around the office at midnight, checking that we hadn’t missed any of the machines due for replacement that night, I would come across several of our graduate trainees, still working away, keen to give the right impression to their employer and hoping to secure a permanent position at the end of their training. As I wondered how on earth they managed to keep this up for an extended period – and indeed why they put up with the workload – I realised that their working life was little different from the way I’d started out at OCG. They’re young, they have energy, and most have the flexibility in their personal lives to be able to make this work commitment.

OCG sometimes asked impossible things of its staff. One group of trainers found themselves on a fourteen-week project teaching the same one hour course five times a day and living out of a hotel at a motorway service station. Sometimes the margins on the training we were delivering didn’t allow for us to book accommodation. One client in Greenwich comes to mind – and so you’d spend two-and-a-half hours driving to the training venue each morning, and then the same or longer driving back at the end of the day, every day for a week. A last minute schedule change could see you booked to teach a new course at short notice, and everything in your life would have to make way so that you could prepare.

If you want your staff to have this kind of commitment, you have to give something back. I think OCG got this right. We paid well (in my opinion). We did pay reviews twice a year because that was how fast your value to the company changed. And if you weren’t booked to teach a course, there was flexibility about things like what time you arrived in the office. After courses finished on a Friday afternoon, you’d have to stay on to build the PCs for the following week’s training, and step 1 in that process was to go to the fridge and grab a beer – which is great from a staff morale perspective but not so good in terms of making sure the training rooms were set up properly!

There were other little things as well, such as making the pool company cars available to staff at weekends or providing a company punt (well, it was Oxford after all). OCG was a small company at the time, and these kinds of things are generally much more difficult to achieve as the organisation gets bigger. But I think it was also part of the management ethos — it doesn’t surprise me that in the video tour of OCG’s current premises, you can spot a table football game in the background.

Without this give and take, as an employer you simply cannot make the same level of demands on your staff. Good will runs out, staff become inflexible or leave, and you can find yourself with problems on all sides.

Not all groups of co-workers are teams

Eventually, OCG recognised the need to send its managers on some kind of management training. One of these was among the best training courses I’ve attended (and I am really bad at attending classroom training that I’m not teaching myself). But we reached an interesting point right near the start of the course. The module was all about encouraging teamwork, and started with a question about what constituted a team… and it was arguable whether the trainers within OCG actually fitted into the definition.

We agreed a collection of criteria for what constituted a team. There had to be a shared goal or common purpose, and there also had to be an element of co-dependence. It was on this last point that the test broke down. We had selected and developed a group of independent, self-reliant people who spent their working days stood in a room with the door closed between them and their colleagues. Yes, trainers would support each other where they could, even if only by listening to the latest horror story of the delegate from hell or a nightmare course. Absolutely, there was camaraderie, one of the coping mechanisms that we used to deal with the non-trivial levels of stress involved in the job. But that’s not necessarily the same as being a team. We were perhaps like a group of independent contractors who spent a lot of time with each other.

I don’t present this as a bad thing – it was just a fact, and once you understood it, it explained some of the ways in which the group behaved and why some of our management approaches weren’t working. Equally, you can use this definition the other way round: if you have a group of co-workers and you want them to act more like a team, you should start by looking at whether the pre-requisites exist. Do they share a common purpose? If so, make sure they understand that. In what way are they co-dependent, or how can you introduce that co-dependence?

I blame Hugh

Looking back twenty years, I know that I wouldn’t be doing the job I have today but for the fact that one day in June, I pointed out the potential grammar confusion in a sentence about “the solitary swans lake”, I didn’t get flustered by Ken trying to take me off script five minutes into my presentation, and I was still stood at the front of the room at the end of a group discussion on train wheels. A very productive day.

Update: US spelling of “sceptical” removed to please Ian Davies.

November 19, 2013
by Jon
1 Comment

Reporting choices and date handling in ServiceDesk Plus

When you’re trying to get data out of ManageEngine ServiceDesk Plus, you basically have a choice between:

  • Built-in reports, which will generally provide something close to what you need without ever being precisely what you need
  • Your own custom report, created by choosing the fields you want, filters, grouping and sorting from a series of drop-down lists. These can be a great way of getting started, but I often find that they can be quite limiting in terms of which tables you want to draw information from.
  • Query reports, where you write your own SQL statement to query the underlying database and run it from within the SDP UI itself
  • Querying the database directly from your database’s own querying tool, which has the most flexibility but which also requires you to do the most work.

Except for the most straight-forward queries that can be generated using the custom report tools, I will typically start from Microsoft SQL Server Management Studio and write the query from scratch. If I’m likely to re-use the query or I need to pretty up the results, I may see if it can be re-worked so that it could run as a Query Report inside SDP.

Query Reports have a number of limitations:

  • They must be a single SELECT statement. As far as I can tell (please write in if I’m wrong), there is no way to return multiple tables of data in a single Query Report. Nor is there a way of running a series of commands that eventually return a single table of data. Nor can you do something like executing a stored procedure that returns a single table of data.
  • You cannot use line breaks to help format the SELECT statement as you write it. I burned more time than I’d like to admit discovering that one, and it makes writing Query Reports much more difficult than it really needs to be, especially when you remember that there’s no colour coding in the SDP query editor, unlike external tools.
  • You have to give every column in the output a name, which means paying attention to any calculated columns and making sure you provide a column alias.
  • If your query fails to execute for whatever reason, you will get a reasonably unhelpful display of the Java exception that was thrown by SDP rather than the slightly more meaningful SQL error that underlies it.

Using Query Reports, you get the advantage of some help with date handling. If you’re using date criteria in your query, particularly for something that will be used as a scheduled query, you can use <from_thisweek>, <to_thisweek>, <from_today>, <to_today> and various others as listed on the helpcard. I’m cautious of these because I’ve yet to see any documentation on precisely what they mean. What do we class as “today” in a system that’s being used in more than one timezone? What is “this week” or “last week” – does it start on a Monday? A Sunday? And at what time – UTC or local time?
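
By way of illustration, a scheduled Query Report counting this week’s Requests might look like the line below. This is a sketch rather than a tested report, and it’s written as a single line because of the line break limitation mentioned above:

SELECT COUNT(*) AS [Requests this week] FROM workorder wo WHERE wo.createdtime >= <from_thisweek> AND wo.createdtime <= <to_thisweek>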

Much more useful are the functions LONGTODATE and DATETOLONG. All dates in SDP are stored as bigint values representing the number of milliseconds since 1 January 1970 00:00:00 UTC. This is great for internal use, but not at all useful for reporting purposes. If you use a Custom Report, SDP will automatically format these columns as dates for you. If you’re creating a Query Report, you need to do the conversion yourself. LONGTODATE will convert an SDP bigint field into a date, and the SDP reporting engine will then format it for you. DATETOLONG can be used when you’re crafting the WHERE clauses of your SELECT statement, as in:

...WHERE wo.CREATEDTIME > DATETOLONG('2013-11-01 00:00:00')

When I call DATETOLONG, I will tend to specify the date format as above, including time, to avoid any risks of d/m or m/d confusion. All dates in SDP are stored in UTC – remember this when writing your queries.
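
Putting the two together, a Query Report listing Requests created since the start of November might look something like this (a sketch, assuming the Request ID lives in workorder.workorderid, and again written as a single statement):

SELECT wo.workorderid AS [Request ID], LONGTODATE(wo.createdtime) AS [Created] FROM workorder wo WHERE wo.createdtime > DATETOLONG('2013-11-01 00:00:00')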

If you step away from Query Reports in SDP to writing your own SQL statements in your database querying tool of choice, you lose access to the DATETOLONG and LONGTODATE functions. There are some functions in the database (at least, there are in the MSSQL database) that you can try to call.

First, unix_timestamp converts from a date string (eg ‘2013-11-23 17:30:00’) to a bigint.

CREATE FUNCTION [dbo].[unix_timestamp] (@dateString varchar(25))
RETURNS bigint
AS BEGIN
  RETURN datediff(s, CAST('1970-01-01' AS DATETIME),
           CAST(@dateString AS DATETIME)) -
           (SELECT dd FROM sdp_DateDiff)
END

The first thing to notice about this is that it’s out by a factor of 1000. Dates are stored as milliseconds since 1 January 1970, not seconds, and so anything that calls this function needs to multiply the output by 1000. The second thing to look at is the reference to sdp_DateDiff. This turns out to be a view within the database:

CREATE VIEW [dbo].[sdp_DateDiff]
AS
  -- The offset, in seconds, between UTC and the SQL Server's local time
  SELECT datediff(s, getutcdate(), getdate()) AS dd

This approach is necessary because you cannot call getutcdate() from within a function on Microsoft SQL Server. What it does is a sort of adjustment between UTC and the local time for wherever your SQL Server is. My brain starts to melt when I try to work out the timezone maths; I suspect the approach isn’t perfect, but it’s close enough. I’d also note that in some multi-timezone companies, the time zone of the SQL Server won’t be the most appropriate location to refer to.
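To make that adjustment concrete (a worked example of my own, not anything from the documentation): on a SQL Server running an hour ahead of UTC, getdate() is an hour ahead of getutcdate(), so sdp_DateDiff returns dd = 3600. Feed unix_timestamp the local-time string '2013-07-01 12:00:00' and you get back the epoch seconds for 11:00:00 UTC. Remembering the factor of 1000, a comparison against the stored dates would then look something like this (wo.WORKORDERID is assumed for illustration):

SELECT wo.WORKORDERID, wo.CREATEDTIME
FROM WorkOrder wo
-- unix_timestamp returns seconds, so multiply by 1000 to match the stored milliseconds
WHERE wo.CREATEDTIME > dbo.unix_timestamp('2013-07-01 12:00:00') * 1000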

The reverse function is from_unixtime:

CREATE FUNCTION [dbo].[from_unixtime] (@dateValue bigint)
RETURNS datetime
AS
BEGIN
  -- Epoch seconds (not milliseconds) converted to a datetime in server local time
  RETURN dateadd(s,(select dd from sdp_DateDiff) +
         (@dateValue),'1970-01-01 00:00:00')
END

Again, the function is out by a factor of 1000, and again there is some attempt to address the difference between UTC and the local time on the SQL Server.
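Which means that to use it against the stored values, you need to divide by 1000 on the way in – something like this (wo.WORKORDERID again assumed for illustration):

SELECT wo.WORKORDERID,
       dbo.from_unixtime(wo.CREATEDTIME / 1000) AS CreatedTime
FROM WorkOrder wo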

Personally, I don’t call either of these functions. Instead, I use the following pair of functions for converting between dates and bigints. They remove the issue of being out by a factor of 1000 and also work purely in UTC – partly because I live in the UK and UTC is local time for me at least half the year, and partly because I distribute reports to people in at least three different time zones, and it’s easier for them to convert from UTC in their heads than from some other time zone.

-- The function names here are mine for illustration; use whatever naming suits you
CREATE FUNCTION [dbo].[LongToUtcDate] (@Input bigint)
RETURNS datetime AS
BEGIN
  RETURN dateadd(s,(@Input/1000),'1970-01-01 00:00:00')
END

CREATE FUNCTION [dbo].[UtcDateToLong] (@Input datetime)
RETURNS bigint AS
BEGIN
  RETURN CAST(datediff(s, '1970-01-01 00:00:00',
         @Input) as bigint) * 1000
END
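Used in anger, a query then looks something like this (the function names are the illustrative ones above, and wo.WORKORDERID is again assumed):

SELECT wo.WORKORDERID,
       dbo.LongToUtcDate(wo.CREATEDTIME) AS CreatedTimeUtc
FROM WorkOrder wo
WHERE wo.CREATEDTIME > dbo.UtcDateToLong('2013-11-01 00:00:00')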

I don’t profess to be a T-SQL expert, so feel free to suggest improvements in the comments below.

Now, once you start dealing with dates, it’s only a short hop from there to wanting to calculate the elapsed time since a given date or between two dates… but we’ll leave that for another post.

November 18, 2013
by Jon

Whatever you’ve got planned, forget it. I’m the Doctor. I’m 904 years old. I’m from the planet Gallifrey in the constellation of Kasterborous. I am the oncoming storm, the bringer of darkness, and you are basically just a rabbit, aren’t you?

A rabbit



The inside of a hard disk

November 18, 2013
by Jon
1 Comment

Managing disk space usage for ServiceDesk Plus

Before you install ManageEngine’s ServiceDesk Plus, you’re going to need to understand the way in which the system consumes disk space to make sure that you’ve allowed for growth. There are some options for managing disk space after installation, but the longer you leave it (and the less free disk space you have when you want to make any change), the harder it will be.

The database

I’m not going to talk very much about this, mainly because there are well-trodden paths for managing the space consumed by databases and their log files, and for controlling the specific drives that those files sit on.

Predicting disk space usage for the database and logs is also difficult. It will, obviously, hugely depend on how you’re using the system: for example, how many Requests per month, how you’re using email within SDP, whether you’re scanning PCs or other network devices, and how long you’re retaining that scan history.

Application files

ServiceDesk Plus requires a minimum of 20gb free hard disk space. We know that because it says so in the System Requirements. But then those of us with long enough memories will also know that Windows XP will run with 64mb RAM. Well, it will drag itself across the ground on 64mb RAM anyway. So perhaps it’s worth looking beyond the official system requirements.

A plain installation of ServiceDesk Plus is going to consume around 500mb of disk space. If you’re running MSSQL or MySQL on the same server, you’ll want to consider the disk space needs for your database application as well.

The main thing that will increase the size of the application installation is patching SDP. Every time you apply a patch, SDP is going to:

  • Copy the PPM file (the patch file) into the \ServiceDesk\Patch folder
  • Expand the PPM file into its component files in a sub-folder in the same location.

You should allow at least 100mb per patch file, and how many of these you have will depend on whether you’re installing every build as it’s released or making larger jumps. To give some context, version 8.2 was released at the beginning of April 2013 and in mid-November, we’re now on patch 13.
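To put rough, purely illustrative numbers on that: if you had applied every one of those builds at around 100mb each, you’d be carrying something like 1.3gb of patch files alongside the ~500mb base installation.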

SDP does not clean these files up after the patch has been successfully installed – they’re there permanently. That said, given that SDP does not support uninstalling a patch after installation has completed, you may feel that you can go in manually and trim some of the older patch files and their respective sub-folders. I’ve certainly done this when I’ve been short of disk space to no ill effect (so far!)

File attachments

Attachments to tickets (and other things) are not stored within the database. Instead, they’re stored in the file system.

Firstly, the \fileAttachments folder is located by default under the \ServiceDesk folder. This contains any files attached to:

  • Requests
  • Problems (the Impact, Symptoms, and Root Cause fields all allow you to attach files)
  • Solutions
  • Notifications and Conversations (inbound and outbound emails relating to Requests)
  • Contracts (if you’re using the Contracts module, obviously)

How much disk space will you need for these things? It depends. Sorry, not very helpful, but there you go.

If you’re interacting with users via email, then you may see a lot of growth here. We get a lot of large Word documents, PowerPoint presentations and PDF files attached to incoming Requests. Often, these need to be modified and sent back. Very roughly, the \fileAttachments folder has grown by 1gb for every 10,000 Requests. 60% of that is in the \Requests folder, 25% in the \Notifications folder, 10% in \Conversations, and not very much anywhere else.
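As an illustrative sum based on that ratio: an organisation raising around 5,000 Requests a month could expect this folder to grow by roughly 0.5gb a month, or about 6gb a year.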

Your mileage may vary.

Since build 8114, SDP has allowed you to configure the location of the \fileAttachments folder. This is done from Admin | Self-Service Portal Settings, a strangely named option in the admin interface that has become home to a number of settings they couldn’t find anywhere else to put.

SDP Attachment Path option

Option to change the fileAttachments location in ServiceDesk Plus

This allows you to move what could be a rapidly growing data folder off the OS partition and onto a data partition or even a network share. If you’re taking it off the SDP server itself, make sure that the link between the SDP server and the file server you choose is fast enough. That said, since all of these files are downloadable attachments rather than components of the page itself, putting the data on a file server shouldn’t affect page load times.

The best time to make this change is straight after you install SDP, but in fact you can do it at any time and it is extremely painless. You change the path in the option above, click Save, and SDP will move all the files from the current attachment path to the new one. It creates a new \fileAttachments folder in the location you specify here.

Inline images

Whereas the \fileAttachments folder contains files that have been attached to Requests, Problems, etc, the \inlineImages folder contains embedded images from Requests and other modules. If a user pastes a screenshot from their computer and emails it to you, or you paste an image into the rich text editor fields in SDP, those images are stored in this location. These files can relate to:

  • Requests (using the common SDP alias of WorkOrder for the folder name)
  • Problems
  • Solutions
  • Conversations (inbound and outbound emails relating to Requests)
  • Signatures (the E-mail Signature Technicians can set by clicking Personalize)

Unlike the \fileAttachments folder, there is no option to relocate the \inlineImages folder to another drive. The reasoning is probably that access time for these files affects page load speed, but it does present a problem: the size of this folder can be significant, depending on usage. For us, it is around 20% of the size of the \fileAttachments folder.


Backups

This is where it all gets tricky.

The SDP backup process effectively builds a large zip file made up of the following:

  • Everything from the \fileAttachments folder
  • Everything from the \inlineImages folder
  • Everything from the \custom folder – images and CSS files for your SDP instance’s appearance
  • Select files from the \bin folder
  • Select files from the \server folder
  • Licence info
  • SQL files representing the data in your SDP database

The default configuration is that the backup files are built and stored in the \backup folder. This starts with generating the SQL scripts for the data backup. Then, the backup process zips up all the other files listed above before finally adding the SQL files as well. It then deletes the SQL files.

It is possible to change the backup location for scheduled backups only – under Admin | Backup Scheduling, click Edit Scheduling and specify a backup location. There are some limitations to this config change that you need to be aware of, however. First of all, it applies to where the final backup file is placed, and not necessarily to all of the temporary files created along the way. Importantly, an intermediate archive file – which contains a compressed copy of all the file attachments and inline images – is created under the \ServiceDesk folder, regardless of where your file attachments are being stored and regardless of the backup location. This can place significant demands on the application partition. Once all the file attachments and inline images have been compressed into this file, it is incorporated into the backup file itself and deleted. Precisely how this incorporation is executed is unclear, but I have noticed that where you are backing up to the default folder location, the amount of free disk space required on that partition can be as much as twice the final backup file size, so I suspect the operation is more copy than move.

The other limitation to this config setting on the backup location is that it only applies to scheduled backups: it does not apply to manual backups (ie, those created by running backupdata.bat in the \bin folder), nor does it apply to backups created whilst applying a patch to SDP. On that last point, I’d note that prior to build 8213 there appeared to be a bug that meant that even if you said ‘No’ to the option to back up as part of the patch process, it performed a backup anyway. Also, there seem to be certain upgrades, such as when going from 81xx to 8200, where you aren’t even offered the choice not to perform a backup.

In larger SDP implementations, especially those with large volumes of attachments and inline images, it can be the backup process that will cause you the biggest capacity issues: even if you have moved the \fileAttachments to a different drive or to a network location, you will still need to allow space for it on the drive containing your SDP application for those occasions where you have to do a manual backup or a backup as part of an SDP patch. (Of course, there are some ways to cheat if you really have to.)

Our experience – and again, yours may vary – is that we’re getting around 25% compression in the backup files, so if you total the \inlineImages folder, the \fileAttachments folder and the database, your final backup size will be around 75% of that. You may need an extra 10% for working space whilst the backup is in progress. I’m assuming that you will have a process in place to move the backup file off the SDP server, so I’m mostly concerned about the working space required here. For those of you retaining your backups in place, I’d also point to the option within Backup Scheduling to have SDP purge backup files older than a given age.
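As an illustrative worked example: with 8gb of \fileAttachments, 2gb of \inlineImages and a 6gb database – 16gb in total – you’d expect a final backup file of roughly 12gb, plus something in the order of another 1gb of working space while it’s being built.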

Anything else

There are a few other bits and pieces that will take up space, but they’re all second order compared with the three areas above. For example, when you generate reports to be emailed as attachments, SDP creates the files within the application folder and doesn’t clean them up itself. There are log files, but only the most recent logs are retained, so there is a limit to how much space they will consume.


As with any server installation, it’s important to consider disk capacity requirements in advance. There are some things you can do to control where ServiceDesk Plus places its files and, therefore, where that disk space is consumed. However, there are also some limitations. You need to watch the amount of free disk space available on the application drive carefully, otherwise you can end up in a mess.

Image credit: Norlando Pobre