No evidence of Balance: the Joint Committee on draft Investigatory Powers Bill

The Joint Committee on the IP Bill has now been stood up, and we’ve finally got the names of the Lords appointed. Following on from the underwhelming start I’ve previously noted, I continue to be underwhelmed, maybe even dismayed, by the appointments. I hope to be pleasantly surprised, but am not confident. Fundamentally, the committee appears to have a pro-authoritarian slant, and has virtually no experience with technology – not a great combination.

Before I discuss the membership in detail, I also wanted to make a point on time. The joint committee is due to report by 11 February 2016. That gives at most 7 weeks for the committee to review the draft bill, and report. This is not much time, especially with Christmas and New Year in the middle of the period. It may be sufficient, but this is definitely something to keep an eye on.

And now to the membership.

Membership Overview

From the perspective of the Lords, there are 2 Conservative, 2 Labour, 1 Crossbench, 1 Bishop(!), and 1 Liberal Democrat. 3 of the 7 have been government Ministers, and 1 was the Head of the Civil Service. None have any in-depth technical knowledge. Overall, the Lords’ contingent is definitely an ‘insiders’ group – indeed 2 are or were members of the Intelligence and Security Committee. When looking at speaking history for DRIPA, the draft IP Bill, and the Anderson report, most have been silent, showing little interest in the subject. Only Lord Strasburger appears to have a pro-civil liberties stance, and only he had involvement with the previous draft Communications Data Bill.

When we include the MPs, there are 6 Conservative, 4 Labour, 1 SNP, 1 LibDem, 1 Crossbench, and 1 Bishop. A minority (1 MP + 3 Lords) have spoken on DRIPA, the Anderson Report, or the IP Bill. The overall committee is less of an insiders’ group (4 Lords + 1 MP) than the Lords’ appointees alone would suggest, but there remains (in my estimation) a very authoritarian slant – I can only point to 2 (Stuart McDonald MP, Lord Strasburger) who are likely to take a more civil-liberties view.

Lords Appointees

Baroness Browning (Conservative 2010, was Minister for Crime Prevention and Anti-Social Behaviour Reduction, Home Office (2011))
Wiki TheyWorkForYou
Hasn’t spoken in any of the recent related debates. Expect to be pro-existing bill/authoritarian.

Lord Butler of Brockwell (Crossbench 1998, was Civil Service (Head of, 88-98), ISC 2010-15)
Wiki TheyWorkForYou
Was pro-DRIPA, although against the emergency process. Spoke on Anderson report, with mixed views. Was affected by IRA Brighton bombing. Expect to be relatively authoritarian, but may bring useful civil service views.

Bishop of Chester (Bishop 2001)
Wiki TheyWorkForYou
Has no relevant experience – not sure why selected. Did speak on the Anderson report. Seems generally rather pro-authoritarian; while he professes to value privacy, he appears willing to give it away. Expressed similar views on the Counter-Terrorism and Security Bill.

Lord Hart of Chilton (Labour 2004, was Solicitor)
Wiki TheyWorkForYou
Barely speaks in debates. Has committee experience of legislative scrutiny. Unknown views.

Lord Henley (Conservative 1977, was Minister of State, Home Office (2011-12) – Crime Prevention and Anti-Social Behaviour Reduction)
Wiki TheyWorkForYou
Barely speaks at debates. Sits on Joint Committee on Human Rights, but am not sure of impact in that role. Expect to be authoritarian.

Lord Murphy of Torfaen (Labour 2015, was Sec State Wales/NI, Shadow Defence, sat on ISC 2001-08)
Wiki TheyWorkForYou
Has voted for mass retention before. Hasn’t spoken in any relevant debates. Expect to be very authoritarian.

Lord Strasburger (Liberal Democrat 2011, was Private Sector, sat on Draft Communications Data Bill committee)
Wiki TheyWorkForYou
Has been significantly involved in all related legislation. Pro-oversight, pro-civil liberties. Only member with experience of draft Communications Data Bill.

Improving OpenVPN security on Synology NAS

This guidance refers to DSM 5.2-5592 Update 4, with VPN Server 1.2-2456, and the official Android client v1.1.16 (build 74).

When setting up a VPN on a Synology NAS, you can choose between PPTP, OpenVPN, and L2TP/IPsec. For assorted reasons, I chose OpenVPN. However, I was underwhelmed with the security stance of the default Synology configuration. Specifically, the default was TLS 1.0, and authentication relied solely on a username/password combo. TLS 1.0 has issues – the current best practice is to use TLS 1.2. Relying only on a username/password opens you up to brute-force attacks, especially if you use a weak password, as many people do on their internal networks.

I have changed this configuration to use TLS 1.2, and TLS-authentication. I opted not to use a user key.

Below I have documented how to install and configure OpenVPN at this security level on a Synology NAS. I am using CloudStation to distribute files between my NAS and my clients – other approaches such as SMB shares would also work.

1) Install the VPN Server

Identify and set up a way to distribute files from the NAS to your client computers (e.g. phone, laptop, etc). I used CloudStation, with the Android and Windows DS Cloud clients.

Set up Dynamic DNS. Go to Control Panel, External Access, DDNS, and click Add. Follow the relevant instructions. Make a note of the hostname you pick. Alternatively, from your home network browse to WhatIsMyIP.com or similar and make a note of your public IP address. Note: Most ISPs will give you a dynamic public IP address, which can change over time, hence the recommendation for Dynamic DNS.
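
If you prefer the command line, something like the following (run from any machine on your home network; api.ipify.org is just one example of several such lookup services) will also print your public IP address:

> curl -s https://api.ipify.org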

Install VPN Server. Log onto the NAS with admin credentials. Go to Package Center, Utilities, and click on Install for VPN Server (by Synology Inc).

Set up Port Forwarding. If your home router supports UPnP, go to Control Panel, External Access, Router Configuration. Click on Set up router, if prompted. When set up, click Create, Built-in application, and check the row which says “VPN Server UDP 1194 1194”. Click Apply, and then Save. If you encounter problems with this, your router may not support UPnP, in which case you need to go into your home router config and set up port forwarding manually. You’ll want to forward traffic from your external IP UDP/1194 to the IP address of your NAS (e.g. 10.0.0.5) UDP/1194.

Optional: You may want to use a non-standard port rather than 1194. If so, you’ll either need to select a Custom Port in the router configuration page, or manually configure on your router. Just replace all mentions of 1194 in the above with the port you select, making sure you don’t use a port which is already in use.

Set up Auto Block. Go to Control Panel, Security, Auto Block. Check the “Enable auto block” checkbox, set the settings as appropriate. I recommend clicking on Allow/Block list, and adding the IP address of the computer you use to administer the NAS from to the “Allow List”. This will stop the NAS from blocking you even if you get the password wrong a few times. Click Apply when done.

2) Configure the VPN Server

GUI Setup

Go to VPN Server, General Settings, and uncheck “Grant VPN permission to newly added local users”. Verify that Auto Block is set up. Click Apply.

Go to VPN Server, Privilege, then uncheck all check boxes except the OpenVPN entries for the users you want to allow OpenVPN access. (Note: I’m assuming you’re not using PPTP/L2TP). I highly recommend you don’t allow admin to VPN in. Click Save when done.

Go to VPN Server, OpenVPN. Check the Enable checkbox, and set up your Dynamic IP address range etc. This must be a different subnet to your home network. If you chose to use a different port/protocol in step 1, change the Port and Protocol values. When complete, click Apply.

Click Export configuration – this will download a zip file to your local machine. Unzip that into your CloudStation folder.

Terminal Setup

SSH in. After doing the above, SSH into your NAS, as user “root”, using the same password as “admin”. If you cannot SSH in, go to Control Panel, Terminal & SNMP, and verify that “Enable SSH service” is checked, and configured as you expect.

Run the following commands, where $user is the username you’re using for CloudStation, assuming the folders are on volume 1 and you unzipped the downloaded configuration into a folder called openvpn.


> cd /var/packages/VPNCenter/target/etc/openvpn/keys
> openvpn --genkey --secret ta.key
> cp ta.key /volume1/homes/$user/CloudStation/openvpn/
> chown $user.users /volume1/homes/$user/CloudStation/openvpn/ta.key
> vi /usr/syno/etc/packages/VPNCenter/openvpn/openvpn.conf

Add the following lines:-

tls-version-min 1.2
tls-auth /var/packages/VPNCenter/target/etc/openvpn/keys/ta.key 0

Save the changes (Esc, :wq), then optionally exit the SSH session.
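
To double-check that the edits have taken effect before restarting, a quick sanity check such as the following (adjust the path if your install differs from the one above) should print back the two lines you just added:

> grep -E "tls-version-min|tls-auth" /usr/syno/etc/packages/VPNCenter/openvpn/openvpn.conf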

Restart the VPN server. This can be done by going to Package Center, Installed, VPN Server, and Clicking Action->Stop, then when stopped clicking Action->Start.

3) Configure the client

Edit the openvpn.ovpn file in your CloudStation folder. Find the YOUR_SERVER_IP placeholder and replace it with the dynamic DNS hostname or IP address you identified in step 1. Then add the following line:

tls-auth ta.key 1

Save the file.
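
For reference, after these edits the relevant part of openvpn.ovpn should look roughly like the sketch below. This is illustrative only – the exact contents Synology exports vary by version, and myhome.synology.me and 1194 are placeholders for your own hostname and port:

dev tun
proto udp
remote myhome.synology.me 1194
ca ca.crt
auth-user-pass
tls-auth ta.key 1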

Upload the ca.crt, openvpn.ovpn, and ta.key files on to your phone – they all need to be in the same directory. If using CloudStation, this will be done automagically when your phone is on your home WiFi.

Install the client “OpenVPN Connect” package on Android. Run it. Press the three dots in the top right, and go to Preferences. Scroll down to Minimum TLS version, and set to TLS 1.2. Go back to the main screen.

Press the three dots again, and select Import. Then “Import Profile from SD Card”. Browse to wherever you downloaded the openvpn.ovpn file, and select it. Enter your username and password.

Disconnect your phone from your home WiFi, and make sure mobile data is enabled. Click Connect. Fingers crossed, after a few seconds, a connection should happen.

If you don’t connect, and no error is shown, try the following:-

  • Verify that you’re using the correct IP/hostname
  • Verify you’ve set up port forwarding correctly
  • If you can’t tell the above, try changing the protocol to TCP. This can be done via the Synology GUI, or by changing the “proto udp6” in the server file to “proto tcp-server”. You’ll also need to change the openvpn.ovpn line “proto udp” to “proto tcp-client”. Don’t forget to restart the server, and delete and reimport the client.
  • Verify that the changes you made manually to the server config are still present, by ssh’ing in and checking with vi. It’s possible that changing settings via the GUI will clobber any manual changes you have made.

4) Optional improvements

By using a non-standard port (i.e. not 1194) you’ll be less likely to turn up on port scans.

Using the ta.key with tls-auth means that anyone attempting to connect to your server will need that key. If you want to use a user key instead of, or as well as, a password, that would add extra security.

By default, with TLS 1.2, the connection seems to negotiate TLS-DHE-RSA-WITH-AES-256-GCM-SHA384, which should be sufficient. If you want a different TLS cipher, first identify the cipher string by SSHing to the server and running "openvpn --show-tls". You can then restrict the selection by adding a "tls-cipher <colon-delimited cipher list>" line to both the server and client configs.
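
As an illustrative sketch (the cipher names must come from your own --show-tls output; these two are just examples of the naming format), the line in both the server openvpn.conf and the client openvpn.ovpn might look like:

tls-cipher TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-GCM-SHA384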

The importance of specificity in Intelligence-related laws

Over the next week, I will be publishing my detailed thoughts on the draft Investigatory Powers Bill. Be warned – they’ll be long, and boring…

But before I do that, I want to discuss something which never seems to be covered. When discussing bills to do with surveillance and intelligence matters, there is always a discussion of the morality of the laws, of the interminable tug of war between privacy and safety. The debates in parliament often cover that, as well as some specific modifications, but what never seems to be discussed is how very different such bills are compared to most others, from a judicial and enforcement perspective.

The legal system in the UK is based around Common Law, generally through an adversarial system. I will below make the case that the legislation created for Intelligence and Surveillance related matters is insufficient, because of shortcomings in our legal system.

But first a bit of background… And a caveat – I am not a lawyer – the below is my understanding of the process and problems, and I would love to be corrected where I’ve made errors. Note: I have used civil liberties groups as an example of the opposition to government, but the relevant aspects could apply to any member of public.

Primary Legislation

Law generally begins with a need. The government decides that something should be made illegal, or should definitively be made legal. The government, or rather the specific departments, will provide a description of what they want to accomplish and pass this to the Office of the Parliamentary Counsel. The OPC will draft a Bill. Eventually this Bill (after multiple iterations) will go through parliament, be voted on, and maybe become an Act of parliament, and law. See [1] for more details.

Secondary Legislation

An aim for Primary Legislation is for it to change slowly and rarely. However, the world changes – government departments are opened, merged, and disbanded. Technology changes. If the Primary Legislation is overly detailed, then parliament would spend all its time updating this legislation for minor tweaks rather than looking at the big picture. Most Primary Legislation therefore allows the government to provide minor updates, and more detailed instructions, through the use of Secondary Legislation.

This Secondary Legislation is limited by the Primary – i.e. the Primary specifically says what limited powers are conferred on the government. The Secondary Legislation, normally “Statutory Instruments” such as regulations, is written by the government and normally still needs parliament to vote on and pass it. However, these votes are generally quite pro-forma, and don’t have the large debates or proposed amendments that occur with primary legislation.

Common Law

A third class of law is created by the courts, rather than government. As cases are brought to the courts for judgement, case law [2] is created. Essentially, during the process of a trial the defendant and prosecution argue with each other (the adversarial system [3]). Ultimately the judge (and jury to a lesser extent) try to make a determination of what the law actually means, and whether the defendant is guilty or at fault. When a decision is made, case law is created – i.e. the court decides that the law, in this instance and any other similar/identical one, means x.

This case law can then be relied on for future interpretation of the primary and secondary legislation. Over time, a set of case law is created for any primary legislation, which will be much more detailed than anything parliament could, or would want to, create.

The Problem

Lack of case law

Intelligence related laws go through the normal process in their creation, both as primary and secondary legislation. However, I assert that they aren’t treated the same at the Common Law stage.

Intelligence related matters are necessarily secret. It is vital that the details of methods and techniques remain out of the hands of the country’s adversaries, as knowledge of them would allow these adversaries to avoid our intelligence agencies. This is a key reason why much intelligence-type surveillance is not allowed as evidence in trials. If included in evidence, then due to the adversarial system the defence would be able and indeed required to delve into how the evidence was obtained. As court proceedings are generally public, this would lead to sensitive information on methods and techniques becoming public.

Under some Acts of parliament, evidence may be introduced in secret, at closed hearings. A ‘special advocate’ is normally nominated to argue the defendant’s case in such a situation – however it should be noted that the defendant themself generally doesn’t know what happens in such courts, nor do their lawyers. There is therefore a lot of nervousness about whether the ‘special advocate’ is doing their job and has access to all relevant information. Furthermore, the detailed conclusions of such hearings do not become public, leading either to no case law being created, or to a secret body of case law such as that created by the US FISA courts [7].

Therefore, the main route by which intelligence-related law would be tested in the courts, and case law created, simply does not operate.

An alternate route to bring such laws into review and interpretation by the courts is through the public either suing the government because they believe the law has been broken (e.g. Amnesty and others over surveillance[4]), or seeking a judicial review if they think the process by which a law has come into effect was incorrect (e.g. David Davis MP and Tom Watson MP over DRIPA[5]).

A judicial review can only be used if there has been an error in process – in the case above, the error being that EU law wasn’t correctly applied/followed when creating DRIPA. The result will generally be to quash, or allow, the law or specific parts of it. It will not, I believe, generally result in case law about the interpretation of the meaning of existing law.

The public can only sue if they have evidence that wrongdoing has taken place. Due to the secrecy inherent in intelligence matters, such evidence does not generally become public. Subjects of surveillance are not, as a rule, aware that they are under surveillance, irrespective of whether it is lawful or not. The suit brought by Amnesty et al was only possible due to the Snowden leaks.

Ultimately therefore, except when egregious errors are made in process, or whistleblowers leak possible areas of unlawfulness, the courts do not get to see these matters in public, and so no case law can be created.

Difference of opinion

Another way of saying the above is that there is no way to clarify what the government thinks a law says, and whether that tallies with what the public thinks it says. Primary Legislation is very vague, and Secondary Legislation is often not much less so. Furthermore, Secondary Legislation generally goes through much less rigorous examination.

A concrete example is that of the phrase “external communications” in RIPA. The government believed it referred to any communication with an external endpoint, including any servers the data routes through. So, for example, if your email server is external to the UK, then your email is an external communication, even when using that email to talk to another person in the UK [6]. This was at odds with what a lot of people, including civil liberties organisations, believed to be the case.

Due to our adversarial system, a judge cannot act as inquisitor, delving into the truth. Instead, they remain an impartial arbiter as two parties fight to convince the judge of their interpretation. Without the laws going through the courts, there is no opportunity for this fight, leaving the legislation wide open to interpretation, and without any realistic check or balance on how the government is interpreting it. Oversight bodies are limited in their powers. They additionally run the ever-present danger of internalising the government’s interpretations (especially within, for example, the Intelligence and Security Committee of Parliament) without realising they are doing so.

Possible Solutions

Ultimately, I think a combination of things is needed for Intelligence-related (which includes Surveillance, such as the draft Investigatory Powers Bill) legislation. This includes changes in the way that such legislation is drafted, the government being more open about its interpretations, and ways to create case law outside of traditional approaches.

The first item needed is greater specificity in both primary and secondary legislation. This runs the risk of creating law which needs changing more often, and so a case can be made that this should be done in regulations rather than the bills themselves. However, it must be recognised that secondary legislation normally goes through on the nod, without much or any debate. If specifics will be implemented in secondary legislation then there must be a recognition that more debate and review will be needed at that stage.

The next is that the government should be open about interpretation of law, even when it applies to potential methods and techniques. This will help build trust between civil liberties groups and the government, and will also help the government avoid situations such as that which the IPT found in the Amnesty case – that the government had been breaking the law but that due to the leaks of Snowden it was now not doing so, because the leaks had made public facts that should already have been public.

Finally, there must be a recognition that the courts do not have the opportunity to create case law in these matters – a situation the current draft Investigatory Powers Bill makes no better, and indeed s171(3) of that draft may make worse. Alternate approaches should therefore be considered: for example, an approach somewhat akin to moot courts [8], where civil liberties groups and government work together to introduce representative test cases, with the government taking a neither-confirm-nor-deny stance with respect to the methods and techniques actually in use. The results of such moot trials could be allowed as case law, which the government would be required to treat as binding.

I submit that the status quo is insufficient, and has contributed to the current breakdown in trust between the people and government. We must look outside normal practices, while staying inside established principles of legislation and jurisprudence, in order to help heal this wound. Failure to do so will only lead to increased recriminations on all sides.

[1] https://www.gov.uk/guidance/legislative-process-taking-a-bill-through-parliament
[2] https://en.wikipedia.org/wiki/Common_law
[3] https://en.wikipedia.org/wiki/Adversarial_system
[4] http://www.ipt-uk.com/docs/Liberty_Ors_Judgment_6Feb15.pdf
[5] https://www.judiciary.gov.uk/wp-content/uploads/2015/07/davis_judgment.pdf
[6] http://www.theguardian.com/world/2014/jun/17/mass-surveillance-social-media-permitted-uk-law-charles-farr
[7] https://en.wikipedia.org/wiki/United_States_Foreign_Intelligence_Surveillance_Court#Secret_law
[8] https://en.wikipedia.org/wiki/Moot_court

An underwhelming start on IPBill

So, the Draft Investigatory Powers Bill has now been released. I’m in the process of working through the draft myself, and will post something here soon. In the interim though, the House of Commons has nominated 7 people to sit on the joint committee of Commons and Lords, to discuss the draft. The names are below.

At a first look, I’m pretty underwhelmed. The makeup (4 Con, 2 Lab, 1 SNP) reflects the breakdown of MPs (not public vote %), which is pretty standard, but I’m disappointed there’s no Lib Dem. The LDs have been easily the most vocal party for civil liberties, and killed the outrageous Snooper’s Charter. Maybe that’s why they’re not included.

Furthermore, it’s of note that 4 of the 7 are new MPs (3 Con, 1 SNP), and so it’s to be expected they’ll do what their party bosses require of them. Only 1 (Suella Fernandes) commented on Wednesday’s debate on the bill. The rest seem to have no real interest in the subject, or applicable knowledge (I’ll come back and edit this when I read more). In the interim, below are the people, with links to their TheyWorkForYou profiles.

EDIT: I’ve now had some time to look into their profiles. Generally relevant-ish qualifications – there’s a load of lawyers but only 1 person with any technology knowledge, and he was just a journalist who specialised in consumer technology. Most appear likely to follow party lines, and overall there’s definitely a pro-authoritarian slant.

Victoria Atkins [Con, 2015-]
TheyWorkForYou

Barrister (Serious & Organised Crime), so will have good relevant knowledge. Expect to be pro-authoritarian.

Suella Fernandes [Con, Barrister, 2015-]
TheyWorkForYou
Debate

Suella may be a good pick. Has knowledge of the law, and at least some interest, despite being a fresh MP. Knowledge of international (US) law.

Mr David Hanson [Lab, 1992-]
TheyWorkForYou

2010 Shadow Minister at the Home Office. Experienced MP, has some knowledge/experience. Expect to be pro-authoritarian (has previously voted for ID cards, and for data retention).

Stuart C. McDonald [SNP, 2015-]
TheyWorkForYou

Has worked for immigration services as a Human Rights Solicitor. May be balanced in views.

Dr Andrew Murrison [Con, 2001-, voted against Iraq war]
TheyWorkForYou

Voted against Iraq war, which took balls as a Conservative. Voted for data retention but against ID cards. Not sure of views, but unlikely to be cowed by whips on moral matters.

Valerie Vaz [Lab, 2010-]
TheyWorkForYou

Has law experience. Seems not to have had an interest in surveillance etc, and has voted in line with government. Not sure why picked. Likely to follow the party line.

Matt Warman [Con, 2015-]
TheyWorkForYou

Only person nominated who has any knowledge of tech (was previously Consumer Technology Editor at The Daily Telegraph). Sits on the Science and Technology Select Committee. Probably shallow knowledge of tech.

DRIPA disapplied following judicial review

I told you so :)  (see previous DRIPA commentary when I said “This bill doesn’t address the shortcomings highlighted in the ECJ ruling, and so it would inevitably be over-ruled in the future.”)

The UK High Court has just ruled that DRIPA section 1 (data retention) is inconsistent with European law. As such, it has disapplied that section of the law – essentially making it no longer law. It has, however, suspended its ruling until March 2016, in order to give the UK government time to respond.

For most of those interested in the subject, this was no surprise. DRIPA was rushed through and didn’t appear to mitigate the issues that had previously caused the ECJ to rule the EU Data Retention Directive invalid/unlawful. It is a kick in the teeth to the government, and will help civil liberties campaigners who had always asserted that DRIPA shouldn’t have been rushed through the way it was.

What is of real interest now is what this means for the upcoming interception/surveillance bill, due to be introduced in Autumn 2015. This bill is aimed at updating RIPA, merging in DRIPA, and potentially (as recommended in both the RUSI and Anderson reports) simplifying the interception/surveillance laws in the UK. There was already a hard deadline for this new bill to receive royal assent – DRIPA has a sunset clause of December 2016 – and many people had already indicated that it would be a rush to get this bill through by then, given its scope. Trying to do the same before March 2016 will be a nightmare, especially given the large number of aspects where many MPs and the general public are diametrically opposed.

So, what will the government do? Firstly, I expect them to appeal – they’ve been given the right to do so, and they lose nothing by doing so. Assuming the appeal fails, they’ve a few options:

  1. DRIPA #2: Rush through a hack to fix DRIPA. In which case, will they keep the existing sunset clause, or try to extend it? Any expedited action would be very unpopular amongst MPs – even those in favour of broad interception etc powers were upset by the government’s tactics last time. Likewise, any attempt to extend the sunset clause would be very unpopular, even though any DRIPA #2 would in any case take up valuable time in the parliamentary calendar.
  2. Compress RIPA-replacement timescale: Rather than aiming for a December 2016 Royal Assent, they could aim for a March 2016 one. This would be feasible, but non-trivial. The committee stages would need to be greatly shortened. It would also leave the government exposed to procedural actions to delay progress, which could lead to it accepting pro-civil-liberties amendments. It may also require a reduction in the scope of the proposed legislation, so that it would just be a RIPA(+DRIPA) replacement, rather than also covering all other ways that interception can legally take place.
  3. Keep to existing timescale: They could just accept that all the extra data that the government wants retained under DRIPA could be lost between March 2016 and Dec 2016. Note that this doesn’t mean they won’t be able to access retained data – they still can using RIPA – nor that companies won’t retain data – they still will, as they may need it for their own internal use – but it does mean that companies may (or will, due to the Data Protection Act) stop retaining any extra data that the government had previously required them to retain. The government and intelligence services wouldn’t be happy with this, but they could quite quickly contact the telecoms providers and see what data would be lost – it may well be a manageable amount. However, it would be politically bad, as the fact that the intelligence services and police could get by without this data would help the civil liberties argument that they don’t need the data.

I honestly don’t know which of these will happen. My gut says (2), or (3) if the data lost isn’t vital.

The actual judgement states that:

The order will be that s 1 is disapplied after that date:
a) in so far as access to and use of communications data retained pursuant to a retention notice is permitted for purposes other than the prevention and detection of serious offences or the conduct of criminal prosecutions relating to such offences; and
b) in so far as access to the data is not made dependent on a prior review by a court or an independent administrative body whose decision limits access to and use of the data to what is strictly necessary for the purpose of attaining the objective pursued.

I am most certainly not a lawyer, but it seems to me that this means that DRIPA s1 could still be applied for “serious offences” if the retention notices themselves state that, in order to access the data, there must be prior review by a court – i.e. a warrant or similar. DRIPA s1(4)(d) seems to allow the Secretary of State to quickly update regulations (i.e. secondary legislation, which doesn’t go through parliament for debate etc) to do this, as “The Secretary of State may by regulations make further provision … Such provision may… include provision about… access to… data retained by virtue of this section”.

For more reading, the judgment can be found here: https://www.judiciary.gov.uk/judgments/david-davis-and-others-v-secretary-of-state-for-the-home-department/

See also the Independent Reviewer of Terrorism Legislation’s first thoughts on the matter: https://terrorismlegislationreviewer.independent.gov.uk/dripa-2014-s1-declared-unlawful/

Turnout requirements for strikes

The current Tory government has long threatened, and is now enacting, legislation to require a certain minimum turnout for a strike ballot, with an even higher bar for the public sector. Specifically, for the non-public sector there would have to be a 50% turnout. For the public sector, there is an additional requirement that 40% of all eligible members would need to back a strike.

The ostensible reason for this is that a number of strikes over the last decade have occurred with relatively small turnouts. For example, in 2014, the GMB union strike only had 23% turnout, and only 17% of eligible members voted in favour of a strike.

The current rules state that only a majority of actual votes cast is needed. In the most extreme (and unrealistic) case, if a union had a million members, but only 1 person replied to the ballot, and voted for a strike, then all one million members would go on strike. This is obviously absurd. The other extreme of requiring all one million to vote in favour is equally absurd.

The situation as it stands favours the “noisy minority” – those who are politically active and radical are more likely to vote, and so their voices are more likely to be heard, giving their views disproportionate strength. It seems logical to me that there has to be a sensible minimum turnout and/or minimum ‘in favour’ – the question is what that number should be.

The current law controlling this is the Trade Union and Labour Relations (Consolidation) Act 1992 and there is a useful Code of Practice for ballots etc. It’s seriously complicated, but very interesting – well worth a read if you’re bored sometime.

One reason for low turnouts is the rules in the law/CoP about how a ballot must take place. The law is very prescriptive about how a ballot takes place, including the format of the ballot, and most importantly that the ballot has to be done on paper, generally sent via first class mail. There are lots of reasons for low turnout due to this – ballots can be lost in the mail, filled out incorrectly, people may be on holiday, or frankly people suck at remembering to post a letter in time etc. I think apathy is the main reason but have no evidence for that.

A simple way to partially address these concerns – making turnouts higher, and thus making it more likely that a result clearly represents the will of the union membership – is to allow electronic voting, ensuring of course that the confidentiality of the secret ballot and the integrity of the result are maintained. This is a non-trivial, but certainly solvable, problem. Giving unions the option to run electronic ballots is, IMHO, the correct way to go.

IOCCO report on Journalist Sources

The IOCCO yesterday (Feb 4th 2015) released their report [1] on the use of RIPA by police to identify journalistic sources. I had a few thoughts I decided to put down here.

Firstly, the report seems to have been rather rigorous, with some exceptions. The conclusions seem decisive and the recommendations seem sensible. The key conclusion is that “Police forces are not randomly trawling communications data relating to journalists in order to identify their sources.”

As ever, the Interception of Communications Commissioner doesn’t pull its punches, criticising that “the majority of [RIPA] applications did not sufficiently justify the principles of necessity and proportionality” (7.15 and 7.16 of the Report [1]). This led to conclusions in 8.6 and 8.7, with recommendations in 8.9.

It will be extremely interesting to see if the government responds to these conclusions, either through Primary or Secondary legislation. I wonder if the current Counter-Terrorism and Security Bill [3] may provide an opportunity for this, although as this Government Bill is in Report stage in the Lords, and hence has almost run its course, it is probably too late – amendments would need to be placed within the next few days.

Organisations outside of scope

It should be noted that possible users of interception warrants beyond the Police forces (see RIPA 2000 6(2)) [2] were not included, as they were out of scope of the investigation by the IOCCO. It’s very unlikely, but not impossible, that the Security Service, SIS, GCHQ, HMRC, or Defence Intelligence, or those in 6(2)(j), would be making RIPA requests which could have been related to journalistic sources.

The Interception of Communications Commissioner may wish to consider including queries regarding journalistic sources within the scope of his annual reporting for all users of interception and communications data warrants, not just the police.

Use after interception

The report was looking for interceptions for investigations which “involve determining if a member of police force or other party have been in contact with a journalist” (Annex B pp. 41 of the Report). Paragraph 4.3 of the report shows how this was a broader remit than just looking at where communications addresses of journalists or their employers were targeted. This is to the IOCCO’s credit.

However, there is a grey area that may not have been covered. Note that it’s possible that a) I’ve misunderstood the law and there is no grey area, b) this was covered by the IOCCO investigation, or c) while the grey area exists, no use is made of it. Indeed, I think (c) to be highly likely when it relates to journalistic sources.

The grey area I refer to is what happens when information of any kind (traffic, subscriber, or service use communications data, or actual intercept) has been acquired under a valid purpose, for a valid reason, and under a valid warrant, not related to journalistic sources, but this information ends up identifying a journalistic source, by ‘accident’ or otherwise, in such a way that it would not fall within the remit of IOCCO’s request in Annex B of their report. Note: I have no reason to believe this is happening; rather, this is floated as a “what if?”

I’m differentiating here between purpose (as defined in RIPA 5(3) for interception, and RIPA 22(2) for communications data) and reason. The reason is the specific reason that is entered on the warrant application, e.g. investigation of large scale drug dealing between people A and B.

The grey area relates to the exact meaning of “authorised purposes” in RIPA ss 15.

RIPA 15(3) states that data should be destroyed as soon as it is no longer needed for the authorised purposes, but nowhere is this term defined. If “authorised purposes” means purpose (as defined above), rather than reason, then data intercepted for one reason could be analysed and used for another reason, as long as the other reasons are covered by a purpose. Furthermore, no actual RIPA request is needed for this subsequent analysis. Given this, then RIPA requests which do not in any way relate to journalistic sources, could lead to subsequent analysis and use which does. Thus if the checks for journalistic privilege, or any other privilege, are done at interception rather than analysis, then these checks could be accidentally, or purposefully, circumvented.

Indeed, this has direct analogies in other areas of policing, for example police executing a search warrant for one reason may seize items unrelated to the search warrant if they have reasonable cause. [4]

This is touched upon in paragraph 6.2 of the Interception of Communications Code of Practice[5], but this is essentially just a restatement of the relevant RIPA sections. It is also touched upon in paragraph 8.7 of the IOCCO report, although the report doesn’t address when data was acquired for one reason, but analysed for another.

As an aside, while interception / communications data warrants themselves must be periodically renewed, the intercepted data itself does not need to be – i.e. the data can be retained for as long as it is needed, or “is likely to become” (RIPA 15(4)(a)) necessary, for any of the “authorised purposes”.

For an example of this grey area, let us suppose the police are investigating the leak of sensitive information to a nation state. They make a RIPA request for relevant information, which when analysed identifies the target was in contact with a journalist. The investigating police officer realises that the target was likely the source for a recent embarrassing story by the journalist. The investigation also identifies that the target was not the source of the leak to the nation state.

In the above example the link between journalist and source has been identified, and could perhaps be followed up on, by the police, even though the police would not have had sufficient grounds for a RIPA request under Council of Europe Recommendation No R (2000) 7, as described in paragraph 6.41 of the IOCCO report. Furthermore, while Principle 6(b) of that document says that such journalistic source information, irrespective of the purpose (or reason, by my definition) for which it was gained, should not be used as evidence before a court, it says nothing about using the information as the foundation for investigation by the police.

The government should consider defining “authorised purposes” with respect to RIPA, and furthermore should clarify what use can be made of data which has been acquired for a specific purpose and reason.

The IOCCO may wish to consider investigating how common it is that data acquired for one reason is used for a different reason.

References

[1] IOCCO Report: http://www.iocco-uk.info/docs/IOCCO%20Communications%20Data%20Journalist%20Inquiry%20Report%204Feb15.pdf
[2] Interception Warrant users: http://www.legislation.gov.uk/ukpga/2000/23/part/I/chapter/I/crossheading/interception-warrants
[3] Counter-Terrorism and Security Bill: http://services.parliament.uk/bills/2014-15/counterterrorismandsecurity.html
[4] PACE Code B: See section 7, pp 15, for Seizure and retention of property https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/306655/2013_PACE_Code_B.pdf
[5] Interception of Communications Code of Practice: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/97956/interception-comms-code-practice.pdf

Snooper’s Charter via the back door

The Counter-Terrorism and Security Bill[1] is currently going through the Lords Committee stage[2] of parliamentary scrutiny. This stage allows interested parties to comment on and provide feedback on the bill, and provides for a line-by-line examination of it. The general purpose is to tweak and amend the bill so that it is consistent, coherent, and actually meets its stated aims.

A number of amendments often result from this process. These are generally quite small, technical tweaks to clarify wording or include missing features. What they generally aren’t are massive changes which attempt to re-introduce other bills via the back door. An amendment proposed this week though does just that, attempting to sneak in the much maligned Snooper’s Charter.

Why should I care?

The powers being requested are, in my opinion, over-broad, with insufficient oversight and controls, confusingly drafted in places, and ultimately represent great potential danger to civil liberties. They’ll be expensive to implement, potentially harmful to your data security and privacy, and may not actually make you any safer.

And furthermore the powers are being sneaked in at the eleventh hour, circumventing a lot of parliamentary processes.

Who is moving the amendment?

The following Lords have moved this amendment:-
      Lord King of Bridgwater: Conservative member who served as Secretary of State for Defence, Northern Ireland, and others, under Thatcher. Chaired the Intelligence and Security Committee 1994-2001.
      Lord Blair of Boughton: Crossbench (i.e. of no specific party) was previously the Commissioner of the Met Police.
      Lord West of Spithead: Labour member, was Minister for Security and Counter-Terrorism.
      Lord Carlile of Berriew: Liberal Democrat, was the Independent Reviewer of Anti-Terrorism laws, succeeded by David Anderson QC. Was generally deemed ineffectual and pro-establishment when in this post, being in favour of control orders and 42 day detention periods.

These Lords are all ‘establishment’ members, whose backgrounds may imply their being more in favour of security controls rather than civil liberties. Personally I find it inconceivable that the government, and Theresa May MP, were not involved in the production of this amendment.

What is the amendment?

Essentially it’s a reintroduction of the Snooper’s Charter, vastly expanding retention beyond that provided for in the Data Retention and Investigatory Powers Act. For the text, see paragraphs 79-99 of [8].

It allows the Secretary of State to require that telecommunications operators (e.g. ISPs and mobile phone operators) retain an assortment of communications data for up to 12 months, and provide the data to certain public authorities when requested. It also allows the Secretary of State to require that telecoms operators use specific techniques, equipment, and systems.

As ever, the devil is in the detail for all these powers and requirements – and there are some serious devils in there. Please see the section “Criticism and Comments” for more information on this.

Why is it an amendment

This is an excellent question, if I do say so myself. The Draft Communications Data Bill (aka Snooper’s Charter) was drafted by the government in 2012 but introduction to parliament was blocked by the Deputy PM Nick Clegg (Lib Dem).

Since then the government rushed through the Data Retention and Investigatory Powers Act 2014, ostensibly to fix data retention notices (from RIPA 2(1)) which had been ruled against by the ECJ. DRIP was very contentious for assorted reasons (see [3],[4]) but was successfully pushed through. A sunset clause of December 2016 was included, and it is expected that the whole subject of data retention and interception will be re-examined early next parliament.

So, the government couldn’t pass the Draft Communications Data Bill due to the Lib Dems blocking it, and couldn’t do too much in the Data Retention and Investigatory Powers Bill as that was emergency legislation and was controversial enough as it was. Theresa May has repeatedly asserted that she wants to pass the Communications Data Bill, and more recently David Cameron has signaled his renewed support in the light of the terrorist incidents in France (despite the fact that France already has something like the Communications Data Bill, which didn’t stop the attacks).

It seems to me therefore that this is an opportunistic attempt to reintroduce a long-standing policy of the Conservative party, taking advantage of the recent terrorist incidents around the world.

Why now?

As mentioned, the recent events in France and elsewhere provide a veneer of justification and shielding, and allow defenders of the amendment to brand opponents as leaving the UK vulnerable to such attacks, despite the evidence that such assertions are wrong.

Interestingly, during the debates on DRIP, one issue was why the sunset clause was so far in the future, and indeed why DRIP was urgent (it was pushed through in just a few days). The government, and supporters, claimed that there was urgency due to the ECJ ruling, and that the sunset clause date was to allow sufficient consideration of an upcoming review by David Anderson QC (and others, see “Reviews of RIPA and DRIP” in [4]).

“I recognise that a number of Members have suggested that this sunset clause should be at an earlier stage. I say to them that the reason it has been put at the end of 2016 is that we will have a review by David Anderson which will report before the general election.” Theresa May [6]

“If Members think about the processes that we want to go through to ensure a full and proper consideration of the capabilities and powers that are needed to deal with the threat that we face and then about the right legislative framework within which those powers and capabilities would be operated, they will realise that that requires sufficient time for consideration and then for legislation to be put in place. That explains the need for the sunset clause at the end of 2016.” Theresa May [6]

“My feeling is that a great deal of work could be done during those 12 months and a set of recommendations could be made available to an incoming Government in May to June 2015.” Lord Hodgson of Astley Abbots [5]

See also comments by Lord Taylor of Holbeach (Hansard HL Deb, 16 July 2014, c600 and c659)

The question therefore is why the amendment is being introduced now, before David Anderson’s review has been completed, and before there has been “sufficient time for consideration”.

To be fair, Lord Hodgson did state that “It is important to remember that the presence of a sunset clause, while it is absolute in its end date, does not mean that legislation could not be considered before that time if a Government decided that they were in a position to present it in Parliament.” [7] But I believe the point still stands – what is the urgency?

Furthermore the amendment has a sunset clause built in of December 2016 – the same as DRIP. So even if passed, this amendment will only survive for less than two years. The amendment allows the Secretary of State to require telecommunications providers to use specific equipment and systems, and provide remuneration, with an estimated cost of £1.8 billion (from the equivalent requirements in the Draft Communications Data Bill). There are also requirements to secure the data and systems sufficiently, and secondary legislation needs to be prepared before all this can happen. Surely therefore there is a significant risk that vast amounts of money and time will be invested into something which will expire, and may not be reintroduced, in less than 2 years time. Maybe the government believes that this money, once spent, would provide additional justification to reintroduce the bill in the future – this amendment playing the egg to the Communications Data Bill’s chicken?

Criticism and Comments

Process and timing

Before commenting on the substance of the amendment, I wanted to comment on the process of using an amendment in House of Lords Committee stage. In short, it’s despicable. The HL Committee stage is one of the last stages for the bill – it has already been through the majority of stages which could have considered and commented on this amendment – the House of Commons Second Reading, Committee and Report stage, and Third Reading stages, and House of Lords Second Reading. The only remaining stages are the House of Lords Third Reading and the final Consideration of amendments.

Sneaking in such a large amendment, which would be large enough to be a separate Bill on its own, at such a late stage doesn’t allow parliament the proper time to consider and comment on the proposed powers. It doesn’t allow proper time for the public and interested parties to review the powers, and communicate with their MPs – in fact all the stages at which an MP would normally propose changes to an amendment have already been passed.

Waiting so long to propose such a large amendment with such an impact on civil liberties can be nothing but an attempt to game the system and sneak in an unpopular policy via the back door.

Blanket retention

The amendment does not specifically require blanket retention, however it does provide for the Secretary of State to issue notices which would result in blanket retention. Conceptually I’m torn on this subject – I can see the usefulness of having long-term records of communications data, which can be queried after the fact, by authorised officials. However it’s also very dangerous having such a large amount of sensitive data collected, and there’s a real danger from the fishing expeditions that can be performed on such data.

Ultimately, the acceptability of such retention is reliant on how securely the data is stored, and the quality of the safeguards and oversight on access to the data by both the authorities and the telecoms operators themselves. Unfortunately this amendment is very weak regarding oversight and safeguards, and provides no limits on what the telecoms operator may themselves do with the data.

On the latter point, retention is normally governed by the Data Retention (EC Directive) Regulations 2009, implementing Directive 2006/24/EC of the EU Parliament, together with the Data Protection Act 1998 (DPA). I am assuming that the telecoms operators will not be allowed to use data retained due to this amendment for their own purposes not related to the amendment. Doing so would be contrary to Data Protection principle #2 of the DPA: “Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes.”

It should be noted that Communications Data could be “Sensitive personal data” as defined in the DPA; for example, information that a user is using Grindr would be classified as sensitive personal data under subsection (2)(f), “personal data consisting as to […] his sexual life”. As such, any processing done with that data must be in accordance with Schedule 3 of the DPA [15] – I think section 7 of that schedule allows this processing, but I’m not sure.

Amendment – Terms

It will be useful to be familiar with certain terms – described below. References to the amendment will be to the PDF of the amendments [9]. Note that I’m not covering the parts relating to postal services.

  • Communications Data: The set of all traffic data, use data, and subscriber data. Defined in pp14 section 1.
  • Authorisation Data: Communications data which is data obtained in order to gain authorisation to obtain communications data. This is defined under “Filtering arrangements”, wherein communications data can be obtained and processed without an authorisation, in order to provide evidence for an authorisation to be sought. Defined on pp 11 subsection (1).
  • Traffic Data: Data to do with the addressing, protocols, timestamps, and related information. See “Traffic Data” for some comments. Defined on pp17 subsections (2), (3).
  • Use Data: Data about how, when, where, etc a user uses the telecommunications service. Explicitly doesn’t include contents of the communication. Defined on pp17 subsection (4).
  • Subscriber Data: Information held by the telecoms service provider which isn’t Use Data or Traffic Data, about the user of the telecoms service. Defined on pp17 subsection (5).
  • Part 3B Data: Seems to be another word for Communications Data, but maybe specifically just the communications data which is being obtained/requested by a public authority. Defined pp 6 section 1.
  • Interception: Has the same meaning as in RIPA (sections 2 and 81), but see “Interception” below.
  • Relevant public authority: The police (and similar), National Crime Agency, and intelligence services. Defined on pp12.
  • Technical Oversight Board: Board established by section 13 of RIPA, which “shall consider the technical requirements and the financial consequences, for the person making the reference, of the notice referred to them and shall report their conclusions on those matters to that person and to the Secretary of State” RIPA 12(6)(b) [11]

Traffic Data

The Traffic Data, defined on pp 17, may be extremely broad. I believe it may include data that would traditionally be considered content, with subsections (2)(a) and (2)(b)(v) especially broad.

Subsection (3) is one of the most opaque sentences I’ve ever read – I still don’t know what it means or is trying to say: “Data identifying a computer file or computer program access to which is obtained, or which is run, by means of the communication is not “traffic data” except to the extent that the file or program is identified by reference to the apparatus in which it is stored.”

Retention Period

By default data will need to be retained for 12 months ((Period for which data is to be retained) pp 3), although the period may be shorter if the Secretary of State so desires. However, this can be extended indefinitely if a public authority informs the telecoms provider that the data is or may be required for the purpose of legal proceedings.

Given that all data may be required, this could result in public authorities requiring permanent storage of data. Furthermore, the clause doesn’t specify that only the subset of data which is needed should be retained. For example, if legal proceedings regarding subscriber X are possible and an extension is needed, should only subscriber X’s data be retained beyond the 12 months, or all data?

Subsection (4) does require that a public authority inform the telecoms provider as soon as reasonably practicable when the data is no longer needed, which may be a sufficient safeguard against indefinite storage of all or most data.

One question I have is why the data needs to be retained after it has been provided to the public authority. The only reason I can think of is if the defence in legal proceedings is entitled to access to the data direct from the telecoms provider – nothing in the amendment directly allows for this, although there is the standard “otherwise as authorised by law” ((Access to data) subsection (1)(b) on pp 4).

Authorisation for Test Purposes

In addition to being able to get authorisation to communications data for specific investigations and purposes, subsection (1)(b)(ii) of (Authorisations by police and other relevant public authorities) on pp 6 allows authorisation to be given for “the purposes of testing, maintaining or developing equipment, systems or other capabilities”.

While I can see the need for access to live data in order to test equipment, this should very much be the exception rather than the rule. This subsection is the only mention of such authorisation or use for test purposes, and there are no additional safeguards to ensure this is a rare event and that privacy and proportionality are considered. For example, while I can understand if my subscriber data is accessed in pursuance of an investigation into some criminal behaviour, I would be incensed if it were accessed without my knowledge to test some equipment, especially as such testing may take several weeks and lead to a protracted attack on my privacy.

Interception

Subsection (4) of (Power to ensure or facilitate availability of data) on pp2 states that “Nothing in this Part authorises any conduct consisting in the interception of communications in the course of their transmission by means of a telecommunication system.” This is restated in (Authorisations by police and other relevant public authorities) subsection (5)(a) on pp7. Interception is defined according to sections 2 and 81 of RIPA.

Interception normally would require a RIPA section 8(1) warrant. However, as stated in a witness statement [13] by Charles Farr of the Home Office, communications which terminate or originate outside the UK only need the very broad 8(4) warrant.

In the appeal between Coulson/Kuttner v Regina [12], the Lord Chief Justice ruled that, despite court rulings such as R v E [14] – where the court said that “‘interception’ denotes some interference or abstraction of the signal, whether it is passing along wires or by wireless telegraphy, during the process of transmission.” (para 20) – listening to voicemails stored on a server still counts as interception. Thus the courts seem to think that even temporary caching and storing in intermediary servers still counts as transmission, and hence accessing these would count as “interception”.

In that appeal, the Crown submitted that “The Crown does not maintain that the course of transmission necessarily includes all periods during which the transmission system stores the communication. However, it does submit that it does apply to those periods when the system is used for storage ‘in a manner that enables the intended recipient to collect it or otherwise have access to it’.” (para 11)

The question remains from the Crown’s contention: what “periods during which the transmission system stores the communication” do not count as the “course of transmission”, and hence access to which would not count as interception?

Furthermore, while subsection (4) of the amendment doesn’t authorise interception, neither does the amendment disallow interception. How, therefore, do the requirements for retention in subsection (3)(b) tally with a RIPA 8(4) warrant? Can a (3)(b) requirement in a retention notice be used to facilitate access to data under a RIPA 8(4) warrant?

Filtering Arrangements

Several pages of the amendment deal with “Filtering arrangements” – see pages 9-13. Even after having read these sections several times I’m still not sure what exactly they mean. But if they mean what I think they mean – the ability to go fishing for data without any warrant or per-case authorisation being needed – then I’m not happy at all.

(Filtering arrangements for obtaining data) subsection (2) states that these “filtering arrangements” may “involve the obtaining of Part 3B data in pursuance of authorisation” – i.e. obtaining communications data in order to get authorisation to get communications data. The data will be obtained (subsection (2)(b)(i)), processed ((2)(b)(ii)), then disclosed to a designated senior officer ((2)(b)(iii)).

Now this may mean that a designated senior officer ((1)(a)) may be able to do a limited query to verify whether a request for authorisation is valid. For example, a police force requests authorisation to request details about subscriber X for IP address Y, so a designated senior officer does a quick check by querying the subscriber data for IP address Y, to verify that it does belong to subscriber X. This appears to be a use of the filtering arrangements on pp 9/10 (Use of filtering arrangements in pursuance of an authorisation). If this is the purpose for the section then I can see the usefulness of it, as long as it is secure and limited, and has good oversight.

It may, however, mean that a designated officer can grep for specific information – for example, all subscribers who are using Tor – and use this as justification to provide authorisation against those subscribers. If this is the purpose, then I’m very much not happy. This sort of fishing trip, when there’s no definitive evidence of a crime having happened or being planned, is a big no-no.

As drafted, I honestly don’t know what the purpose or mechanism for these “filtering arrangements” is. This whole set of clauses needs to be reworked to be more precise IMHO.

As an aside, some parts of these sections seem to imply that the Secretary of State themselves must do the querying etc.

Requirements on Telecoms Service Providers

The Secretary of State can impose an assortment of requirements on telecoms operators when serving them with a retention notice. These are defined on pp 2 (Power to ensure or facilitate availability of data) subsection (3), as part of a requirement under subsection (2)(b).

Also under (2)(b) the Secretary of State can impose ‘restrictions’. What ‘restrictions’ may be imposed is not defined.

The most critical of the requirements is that the Secretary of State can mandate that telecoms operators must “acquire, use or maintain specified equipment or systems” (subsection (3)(b)(ii)).

Essentially the government can order telecoms providers to put a black box on their network, which may provide the government a back door into their system. The telecom provider may not know what the box does, and may not be allowed to test it. The government can just say “trust us” and the telecoms operator must accept it. The government is also not liable for any losses if the black box goes wrong.

While the box cannot be used for “any conduct consisting in the interception of communications in the course of their transmissions” (subsection (4)), the actual definition of “interception” is rather fluffy – as discussed in the “Interception” section above.

If I were a telecoms operator I would be extremely unhappy with this, and as a user of such services I’m not comfortable either.

Confidentiality

It’s interesting to note that nowhere in the amendment is there a requirement for the telecoms provider to maintain the confidentiality of any request(s) for data by public authorities. So a telecoms provider could a) tell the subject of such a request that the police have asked for their data, b) provide summary information to the public about how many such requests there have been, and/or c) detail publicly what information they collect and retain and so what information relevant public authorities could query for.

It’s possible that such a confidentiality requirement could be imposed under (Power to ensure or facilitate availability of data) subsection (3), but I’m not sure it is covered by that section. Or confidentiality may be deemed a restriction under subsection (2)(b) – the allowed scope of such restrictions isn’t defined anywhere.

Personally I’m a fan of transparency where possible – I think ISPs should report what data they’re retaining, and provide summary information on what is being requested (such as number of users per year) – although this can and should also be reported by the IOCCO or similar – but I can also understand why they should not be allowed to tell their customers that they specifically are being targeted.

Oversight

Speaking of the IOCCO, the subject of oversight is incompletely covered – specifically it is only covered where it relates to “Filtering Arrangements”.

The Secretary of State is required to give the Interception of Communications Commissioner certain information (pp 9, (Filtering arrangements for obtaining data) subsection (4)), provide an annual report (pp 11, (Duties in connection with operation of filtering arrangements) subsection (5)(b)) and report any significant contravention of the rules (subsection (7)). Whether the annual report will provide sufficient information for the IOCCO, I don’t know, but the subsection (4) requirements at least seem adequate.

There is not, however, any discussion of judicial oversight, appeals, or complaints other than by the telecoms provider, for retention orders or ‘Part 3B’ requests for the retained data. The IOCCO does not appear to have the power to investigate complaints nor impose penalties as the data retention from the amendment doesn’t derive from a RIPA warrant. It’s possible that other bodies may be able to investigate complaints by citizens, but this isn’t specifically called out – the situation is very complex as shown by the Surveillance Roadmap [10] (I especially recommend the table toward the back).

Telecoms providers can refer a retention notice to the “Technical Oversight Board”, but that board only provides oversight of the technical requirements and financial consequences (subsection (6)(b) of [11]), not the legality etc. of the request. Furthermore, the Secretary of State can ignore the feedback from the Technical Oversight Board, and once ignored the subject cannot be referred to the Board again.

There is also a requirement for the Secretary of State to consult OFCOM, the Technical Advisory Board, and the telecoms providers, before issuing a retention notice (pp 2 (Consultation Requirements)), but what a consultation means isn’t defined, nor is there any requirement for the Secretary of State to actually pay any attention to any feedback from such consultation, nor that such consultation should be public.

There are at least two stages where safeguards should apply: retention notices from the Secretary of State, and the authorisation for, and obtaining of, retained data by relevant public authorities. Currently there is a requirement for the former to be “in writing” (pp 4 (Other Safeguards) subsection (1)(a)). For the latter, authorisation must be documented as described in pp 7 (Form of authorisation and authorised notices).

It should be noted though that the amendment doesn’t say who, if anyone, can review or comment upon any of this documentation.

So, in summary, the oversight in this amendment is not fit for purpose.

Part 3B requests against People

Normally it would be expected that telecoms operators would be the recipients of both retention notices and requests for communications data (Part 3B data) which has been retained. However, (Authorisations by police and other relevant public authorities) subsections (3)(b) and (3)(c) allow the latter to be served on individuals – “any person whom the authorised officer believes is, or may be, in possession of Part 3B data” or “is capable of obtaining it”. So, rather than serving the notice on an ISP, which would have a legal team to investigate the legality of the request and could fight it in the courts if it desired, an authorised officer could serve it on one of the people who work as a system administrator at the ISP.

That seems dangerous to me – there are undoubtedly reasons why an individual rather than a company may need to be served, but this is ripe for misuse, especially if such a notice can include a confidentiality clause, such that the individual may be required ((Duties of telecommunications operators in relation to authorisations) subsection (2), pp 8) to provide such data without the knowledge or permission of their employer.

Liability and Compensation

People acting in accordance with Part 3A (i.e. retention notices) are protected from any civil liability according to (Enforcement and protection for compliance) subsection (4), pp 5. There does not, however, seem to be any such protection for Part 3B (i.e. public authorities obtaining data). Furthermore given that there is an obligation in Part 3A (Data security and integrity) on pp 3 to secure the data, I do wonder if such protection from civil liability would exist if, for example, a user’s communication data was stolen due to security shortcomings in their system.

Furthermore, who would be liable to civil suit if data were stolen from equipment, or due to standards or practices, which the Secretary of State has mandated ((Power to ensure or facilitate availability of data) subsection (3)(b))?

This issue of liability needs further clarification.

(Operators’ costs of compliance with Parts 3A and 3B) states that the government must recompense operators for the costs incurred, or likely to be incurred, in complying with this amendment. The amendment obviously doesn’t estimate how much this may cost HMG, but it should be noted that estimates for the Draft Communications Data Bill were £1.8 billion.

Part 3C

There is no Part 3C. However, it’s mentioned on pages 2, 13, 14, and 18. I wonder what it was, and why it’s missing.

Obviously this is a well drafted amendment…

Conclusions

This amendment is a shocking attempt to circumvent opportunities for comment and railroad an unpopular policy through parliament. This is just the latest in a series of such attempts by the government.

The amendment is badly drafted and confusing. It solves a problem that doesn’t exist – retention is already required by DRIP. Oversight is wholly insufficient, there is no judicial involvement, and there is no way for individuals or telecoms companies to complain.

References

[1] Counter Terrorism and Security Bill homepage
[2] House of Lords Committee stage
[3] DRIP Introduction (Blog)
[4] Update on DRIP (Blog)
[5] Hansard HL Deb, 17 July 2014, c726
[6] Hansard HC Deb, 15 July 2014, c714
[7] Hansard HL Deb, 17 July 2014, c736
[8] Counter-Terrorism and Security Bill, Amendments (HTML) (Note: Different order to PDF)
[9] Counter-Terrorism and Security Bill, Amendments (PDF)
[10] Surveillance Roadmap
[11] RIPA 2000 Section 12
[12] Coulson v R Appeal
[13] Charles Farr Witness Statement
[14] Regina v E appeal
[15] Data Protection Act Schedule 3

HoloLens – Some analysis

22/1/15 11:00 Updated with specs from [6], [7], [8], [9], a comment on resolution vs FOV, and an update on the HPU location from [12].

HoloLens blows me away with its possibilities. I love my Oculus Rift DK2, and Virtual Reality is perfect for when you want to just concentrate on the computer world, but I’ve always been keen to see a good Augmented Reality solution. HoloLens may be it – check it out at [5].

There had been rumours of MS working on something like this for a while – for example patent applications have been filed. [1][2] But no-one seemed to expect such a mature offering to be announced already, with a possible early release in July 2015 and wider availability in Autumn 2015 when Windows 10 is released. If the HoloLens and the Windows 10 Holographic UI deliver as announced, then I’ll be buying.

Speaking of which, for all Microsoft’s talk of “Hologram” this and “Hologram” that, as far as I can see no holograms are being used. Instead, “Hologram” here is MS marketing speak for Augmented Reality. Their use of the word is inaccurate and misleading, but it’s also frankly more meaningful to normal consumers, so the choice is entirely understandable.

Figure 1: HoloLens

With that out of the way, here’s a bit of analysis of the HoloLens and the Windows Holographic UI. Note that I haven’t seen or touched one of these in person, so take everything with a big pinch of salt….

Outputs

There are two sets of output supported – a “Holographic” display, and Spatial Audio.

Display

Display type

The most stand-out feature is the “Holographic” display. This appears to be using an optical HMD with some kind of waveguide combiner. That’s what those thick lenses are. This is also touched on in the MS patent filing [2].

Focal length

An important question is what the focal length is set to, and whether it varies. To explain the importance of this, let’s do a quick experiment. Put your hand out in front of you. Look at it, and you’ll notice the background gets blurry. Look at the background behind your hand – now your hand gets blurry. That’s because the lenses of your eyes are changing to focus on whatever you’re looking at.

If the focal length on the display is fixed, then the display will be out of focus some of the time. Looking at write-ups, people appear to have used the display at ranges from 50cm up to several metres – and with no comments about blurry visuals. It appears therefore that the optics are somehow either changing the focal length of the display, or are “flattening” the world at large, so that your eyes don’t need to change focal length between short and long ranges.

Transmissivity

The waveguide is a way to shine light into your eyes, but if the world outside is too bright then you would have problems seeing the display. Therefore the front screen is tinted. A question is how much it is tinted – too little and you won’t be able to see the display in bright conditions, and too much and you won’t be able to see the outside world in darker conditions. It’s possible the lenses are photochromic and get darker when exposed to bright light.

Dimensions

I’ve attempted to estimate the dimensions of the display, but these should be taken with a massive pinch of salt. See the Maths section below for where I got the numbers from. My estimate is that the display, per eye, is around 5.6cm wide and 4cm high, and sits 1–2.1cm away from the user’s eyes. That equates to approximately 80–120 degrees vertical field of view, and 100–140 degrees horizontal field of view. If accurate, that’s pretty impressive, and broadly on par with the Oculus Rift.

Since the initial presentation, other write-ups have implied my initial estimate was wildly optimistic. [6] asserts 40×22 degrees, whereas [9] provides two estimates of 23 degrees and 44 degrees diagonal. Descriptions state that the display appears to be quite small – much smaller than that of the Oculus Rift DK2.

Resolution

I don’t have any information on the resolution of the display. Microsoft have stated “HD”, however that can mean many things – for example, is that HD per eye, or HD split between the two eyes? It should be noted as well that HD is pretty rubbish resolution for a display with a large field of view – put your face right next to your laptop or tablet screen and see how pixellated things suddenly look. There are some tricks that could be done if an eye tracker is being used (see the Eye Tracker section) to greatly improve apparent resolution.

The write-ups I’ve seen implied that resolution wasn’t bad at all, so this will be something to keep an eye on. [6] asserts somewhere between 4Mpx (2.5k) and 8Mpx (4k).

It should be noted that the human eye has around a 0.3-0.6 arc-minute pixel spacing, which equates to 100-200 pixels per degree.[10] The “Retina” display initially touted by Apple was around 53 pixels per degree. [11]
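
As a back-of-envelope illustration of why resolution matters so much at these fields of view (the resolutions and FOV figures below are assumptions for illustration, not known HoloLens specs), a few pixels-per-degree values can be computed:

def pixels_per_degree(h_pixels, h_fov_deg):
    # Angular pixel density, ignoring lens distortion
    return h_pixels / h_fov_deg

for label, px, fov in [("1080p over a 40 degree FOV", 1920, 40),
                       ("1080p over a 100 degree FOV", 1920, 100),
                       ("4k over a 100 degree FOV", 3840, 100)]:
    print("%s: ~%.0f px/deg" % (label, pixels_per_degree(px, fov)))

That gives roughly 48, 19, and 38 pixels per degree respectively – all well below the ~53 px/deg of the original “Retina” display [11], let alone the 100–200 px/deg of the eye [10].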

Spatial Audio

The audio aspect of gaming and computers in general has been quite poor for a while now. The standard is stereo, maybe with a subwoofer in a laptop or PC. Stereo can give you some hints about location, but full 5.1 surround sound has been a rarity for most PC users. There are some expensive headphones which support it, but these don’t work properly when you turn your head away from the screen – not ideal with a head-mounted display. It’s notable therefore that HoloLens supports spatial audio right out of the box.

With spatial audio, DSPs are used to simulate surround sound, and the simulation takes into account the direction you’re facing. It’s amazing how useful this is for understanding your surroundings – a lesson that Oculus has also learnt with their latest prototypes of the Oculus Rift.

Reports from the HoloLens imply it’s using some kind of speaker(s) rather than headphones. Questions remain about how directional the sound is (i.e. can other people hear what you’re hearing), how loud the audio is, and how good the fidelity is.

Inputs

The HoloLens appears to be festooned with sensors, which makes sense given that it is supposed to be a largely standalone device.

Outward facing cameras

Either side of the headset are what look like two cameras. Alternatively, they may be a camera and an LED transmitter, as used by the MS Kinect. Either way, these cameras provide two sets of information to the computer. Firstly, they detect the background and must provide a depth map of some kind – likely using similar techniques and APIs to the Kinect. Secondly, they detect hand movement and so are one of the sources of user input.

The background detection is used for ‘pinning’ augmented reality to the real world – when you turn your head you expect items in the virtual world to remain in a fixed location in the real world. That’s really hard to do, and vital to do well. The simplest way to do it is through the use of markers/glyphs – bits of paper with specific patterns that can be easily recognized by the computer. HoloLens is doing this marker-less, which is much harder. Techniques I’ve seen use algorithms such as PTAMM to build a ‘constellation’ of edges and corners, and then lock virtual objects to these.

Reports seem pretty positive about how this works, which is great news. A big question though is how it works in non-ideal lighting – how well does it track when it’s dark/dim or very bright, there’s moving shadows, etc. For example, what if you’re in a dim room with a bright TV running in the background, casting a constantly changing mix of light and dark around the room?

As mentioned, the cameras are also used for hand tracking. The cameras are apparently very wide angle, to be able to watch hands across a wide range of movement; however, many questions remain. These include how well the tracking works when hands cross over, become fists, and turn. Some finger tracking must be performed, judging by the click movement used in many of the demos – are all fingers tracked? And how is this information made available to developers?

Eye tracker

During some of the demos the demonstrators have said that the HoloLens can tell where you’re “looking” – indeed that is used extensively to interface with the UI. This may be based on just the orientation of the head, or some reports seem to imply that there’s actual eye tracking.

If there is eye tracking, then there are likely cameras (possibly in that protuberance in the centre) tracking where the user’s pupils are. That would be very cool if so, as it provides another valuable interface for user input, but it could also provide even more.

When tracking the pupil, if the optics can ‘move’ the image to different parts of the waveguide, then the system could always provide a higher resolution image at the location you’re looking at, without having to waste the processing power of rendering a high resolution over the whole display. Thus you could get an apparently high resolution over a broad field of view, with a display that only actually displays a high resolution over a small field of view.

Also, by analysing how the pupils have converged, the computer can judge how far away you’re looking. For example, put your hand out in front of you and focus on one finger. Move the hand towards and away from your face, and you’ll feel your eyes converging as the finger gets closer – watch someone else’s eyes and you’ll see this clearly. If the computer can judge how far away you’re looking, then it could change the focal length of the display itself, so that the display still appears in focus. It could also provide this information to any APIs – allowing a program to know, for example, which object the user is looking at when there’s a stack of semi-transparent objects behind each other.
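
As a rough illustration of the geometry (the interpupillary distance, and the idea of measuring convergence this way, are my assumptions rather than anything Microsoft has described), the fixation distance follows from simple trigonometry:

import math

IPD_CM = 6.3  # assumed average interpupillary distance

def convergence_angle_deg(distance_cm):
    # Inward rotation of each eye, relative to looking at infinity
    return math.degrees(math.atan((IPD_CM / 2) / distance_cm))

for d in (30, 60, 200, 500):
    print("fixating at %dcm -> each eye converges ~%.1f degrees" % (d, convergence_angle_deg(d)))

Note how quickly the angle shrinks with distance – around 6 degrees at 30cm but only about a third of a degree at 5m – so any such depth estimate would only be useful at close range.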

Microphone

A microphone is built-in, which can be used both for VoIP such as Skype, and also as a source of user input using Cortana or similar. Questions include quality, and directionality – will the microphone pick up background noise?

Positional sensors

The headset obviously detects when you move your head. This could be detected by the cameras, but the latency would likely be too large – Oculus have found a latency of 20ms is a good target, and anything over 50ms is absolutely unacceptable. Therefore there are likely gyros and accelerometers to quickly detect movement. Gyros drift over time, and while accelerometers can detect movement, they become inaccurate when trying to estimate the net movement after several moves. Therefore it’s likely the external cameras are periodically being used to recalibrate these sensors.
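
As a toy illustration of that recalibration (a basic complementary filter – my own sketch, not anything Microsoft has described), fusing a fast-but-drifting gyro with a slower absolute reference might look like this:

def fuse_step(prev_angle, gyro_rate, dt, camera_angle, alpha=0.98):
    # Integrate the fast gyro for low latency, then gently pull the result
    # towards the slower, absolute (camera-derived) estimate to cancel drift
    predicted = prev_angle + gyro_rate * dt
    return alpha * predicted + (1 - alpha) * camera_angle

angle = 0.0
for step in range(5):
    # e.g. gyro wrongly reports 1 deg/s drift over 20ms frames; camera says we are really at 0.0
    angle = fuse_step(angle, gyro_rate=1.0, dt=0.02, camera_angle=0.0)
    print(round(angle, 4))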

Given that this headset is supposed to be standalone, it’s possible the headset also includes GPS and WiFi for geolocation as well.

Bluetooth

I would be amazed if HoloLens doesn’t include Bluetooth support, which would then allow you to use other input devices, most notably a keyboard and mouse. Using a mouse may be more problematic – you need to map a two-dimensional movement into a three-dimensional world – however mice are vastly more precise for certain things.

Processing unit

One surprise in the launch was that no connection to a PC/laptop was needed. Instead, the HoloLens is supposed to be standalone. That said, not all the computing is done in the headset alone. According to [4] there’s also a box you wear around your neck, which contains the processor. Exactly what is done where – in the headset or the box – hasn’t been described, but we can make some educated guesses. And all this is directly related to the new Holographic Processor Unit (HPU).

HPU

Latency is king in VR/AR – head movement and other inputs need to be rapidly digested by the computer and reflected on the display. If this takes longer than 50ms, you’re going to feel ill. Using a general-purpose CPU and graphics processing unit (GPU) this is achievable, but not easy. If your CPU is also busy trying to understand the world – tracking hand movements, backgrounds, cameras, etc. – then it gets harder.

Therefore the HPU seems to be being used to offload some of this processing – the HPU can combine all the different data inputs and provide them to applications and the CPU as a simple, low bandwidth, data stream. For example, rather than the CPU having to parse a frame from a camera, detect where hands are, then identify finger locations, orientation, etc, the HPU can do all this and supply the CPU with a basic set of co-ordinates for each of the joints in the hands.
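
To make that concrete, here’s a sketch (my illustration only – the joint names and layout are not any documented HoloLens format) of the sort of compact structure the HPU might hand to the CPU instead of raw camera frames:

from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]  # metres, headset-relative

@dataclass
class HandFrame:
    timestamp_ms: int
    hand: str                # "left" or "right"
    joints: Dict[str, Vec3]  # e.g. "wrist", "thumb_tip", "index_tip"

frame = HandFrame(timestamp_ms=1000, hand="right",
                  joints={"wrist": (0.05, -0.20, 0.35),
                          "thumb_tip": (0.02, -0.15, 0.30),
                          "index_tip": (0.04, -0.14, 0.28)})
print(frame.hand, len(frame.joints), "joints")

Assuming, say, a 640×480 16-bit depth stream at 30fps, the raw feed is roughly 18MB/s per camera; a structure like the above is a few hundred bytes per update.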

Using a specialist ASIC (chip) allows this to be done fast, and in a power-efficient manner. The HPU does a small number of things, but does them very very well.

I mentioned bandwidth a moment ago, and this provides a hint of where the HPU is. Multiple (possibly 4-6) cameras at sufficiently high frame rates result in vast amounts of data being used every second. This could be streamed wirelessly to the control box, but that would require very high frequency wireless which would be wasteful for power. If, however, the HPU is in the headset then it could instead stream the post-processed low-bandwidth data to/from the control box instead.

Where to put the GPU is a harder question – a lot of data needs to be sent to the graphics memory for processing, so it’s likely that the GPU is in the control box, which then wirelessly streams the video output to the headset.

Since my write-up, [12] has come out, which states that in the demo/dev unit used, the HPU was actually worn around the neck, with the headset tethered to a PC. It’s unknown what this means for the final release version, but it sounds like there’s a lot of miniaturisation and optimisation still needed.

Other computers

While the HoloLens has been designed to be standalone (albeit with the control/processor box around your neck), a big question is whether it will support other control/processor boxes – for example, will it be possible to interface HoloLens with a laptop or PC? This would allow power users willing to forego some flexibility of movement (depending on wireless ranges) to use the greater processor/GPU power of their non-portable boxes. This may require some kind of dongle to handle the wireless communication – assuming some non-standard wireless protocols are being used, possibly at a non-standard frequency, e.g. the 24GHz ISM band instead of the 2.4GHz used for WiFi and Bluetooth, or the 5.8GHz used for newer WiFi. My hope is that this will be supported.

Software

Windows 10 Holographic UI

Windows 10 will support HoloLens natively – apparently all UIs will support it. This could actually be a lot simpler to implement than you’d imagine. Currently, each window on Windows has a location (X,Y) and a size (width, height). In a 3D display, the location now has to have a new Z co-ordinate (X,Y,Z), and a rotation around each axis (rX,rY,rZ). That provides sufficient information to display windows in a 3D world. Optionally you could also add warps to allow windows to be curved – that’s just a couple of other variables.

Importantly, all of this can be hidden from applications unless they want the information. An application just paints into a window, which Windows warps/transforms into the world. An application detects user input by mouse clicks in a 2D world, which Windows can provide by finding the intersection between the line of your gaze and the plane of the window.
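
As a sketch of how that intersection might work (a generic ray–plane intersection – nothing here is Microsoft’s actual implementation, and the window dimensions are made up):

import numpy as np

def gaze_to_window_pixel(eye, gaze_dir, win_origin, win_u, win_v, width_px, height_px):
    # Intersect the gaze ray with the window's plane; return pixel coords or None
    normal = np.cross(win_u, win_v)
    denom = np.dot(normal, gaze_dir)
    if abs(denom) < 1e-9:
        return None                         # gaze is parallel to the window
    t = np.dot(normal, win_origin - eye) / denom
    if t < 0:
        return None                         # window is behind the viewer
    hit = eye + t * gaze_dir
    rel = hit - win_origin
    u = np.dot(rel, win_u) / np.dot(win_u, win_u)   # 0..1 across the window
    v = np.dot(rel, win_v) / np.dot(win_v, win_v)   # 0..1 up the window
    if not (0 <= u <= 1 and 0 <= v <= 1):
        return None                         # gaze misses the window
    return (u * width_px, v * height_px)

# A 1m x 0.6m window, 2m in front of the viewer, mapped onto 1920x1080 pixels
print(gaze_to_window_pixel(np.array([0., 0., 0.]), np.array([0.1, 0., 1.]),
                           np.array([-0.5, -0.3, 2.0]),
                           np.array([1.0, 0., 0.]), np.array([0., 0.6, 0.]),
                           1920, 1080))

The application just receives a click at those 2D co-ordinates, exactly as it would from a mouse.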

So most applications should just work.

Furthermore, as the HoloLens will be using Windows 10, maybe it’s more likely that other platforms (e.g. laptops) also running Windows 10 will be able to interface with the headset.

APIs

That said, many developers will be excited to operate in a 3D world, and that’s where the APIs come in. The Kinect libraries were a bit of a pain to work with, so hopefully MS have learnt some lessons there. The key thing will be to provide a couple of different layers of abstraction for developers, giving devs the flexibility to do what they want while having MS libraries do the heavy lifting where possible. MS doesn’t have a great history of this – many APIs don’t provide easy access to lower level abstractions – so this will be something to watch.

It will also be interesting to see how the APIs and Holographic UI interoperate with other head mounted displays such as the Oculus Rift. Hopefully some standards can be defined to allow people to pick and choose their headset – there are some use cases that VR is better for than AR, and vice versa.

Questions

As ever with an announcement like this, there are many questions. However it’s impressive that Microsoft felt the product mature enough to provide journalists with interactive (albeit tightly scripted) demonstrations. Some of the questions, and things to look out for, include:-
– What is the actual resolution, Field of View, and refresh rate?
– Is there really eye tracking?
– How well does the AR tracking work, especially in non-ideal lighting?
– What is the battery life like?
– How well does the Holographic Interface actually work?
– What is the API, and how easy is it to code against?
– What is the performance like, playing videos and games for example – given that games are very reliant on powerful GPUs?
– Can the headset be used with other Windows 10 platforms?
– Can other headsets be used with the Windows 10 Holographic UI?
– Patent arsiness: MS has filed several recent patents in this space – are they going to use these against other players, or are they primarily for defensive use?

Some Maths

You may wonder how I came up with the estimate of Field of View. For source material I used several photos, some information on Head Geometry, and a bit of trigonometry.

Figure 2: Front view – note estimated size in pixels
Figure 3: Worn view – note alignment with eyes
Figure 4: Side view – note distance of lenses vs nose

Firstly, by looking at the photos in figures 2, 3, and 4 I estimated the following:-
– The display (per eye) is around 110×80 pixels in the photo
– The display runs horizontally from roughly level with the outside of the eye, and is symmetrical around the pupil when looking dead ahead
– The display sits somewhere between the halfway point between the depression of the nose between the eyes (the sellion) and the tip of the nose, and the tip itself

From this, we can get the following information, using the 50th percentile for women:-
– Eye width: 5.6cm (#5-#2 in [3], assuming symmetry of the eye around the pupil)
– Screen distance: 1cm to 2.1cm (#12-#11 in [3])

Figure 5: Trigonometry

Given the 110×80 pixel ratio, that gives a height of around 4cm. Using the simple trig formula from figure 5, where tan C = (A/2)/B, we can punch in some numbers.

Horizontally: A = 5.6cm, B=1 to 2.1cm, therefore C=70.3 to 53 degrees
Vertically: A=4cm, B=1 to 2.1cm, therefore C=63.4 to 43.6 degrees

Note that the field of view is 2 x C.

[9] provides a different estimate of the size of the display: “A frame appears in front of me, about the size of a 50-inch HDTV at 10 feet, or perhaps an iPad at half arm’s length.” This results in the following estimates:-
– 50-inch (127cm) (A) at 10 feet (305cm) (B) => C = 11.8 degrees diagonal
– iPad (9.7 inch, 24.6cm) (A) at half arm’s length (60cm/2) (B) => C = 22.3 degrees diagonal
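
For anyone who wants to reproduce the numbers, here’s the whole calculation in a few lines of Python (the physical dimensions are still my rough guesses from the photos, not published specs):

import math

def half_angle_deg(size_cm, distance_cm):
    # C in figure 5: tan C = (size/2) / distance, and the field of view is 2 x C
    return math.degrees(math.atan((size_cm / 2) / distance_cm))

for b in (1.0, 2.1):
    print("eye relief %.1fcm: horizontal ~%.0f deg, vertical ~%.0f deg"
          % (b, 2 * half_angle_deg(5.6, b), 2 * half_angle_deg(4.0, b)))

print("50-inch TV at 10 feet: ~%.1f deg diagonal" % (2 * half_angle_deg(127, 305)))
print("iPad at half arm's length: ~%.1f deg diagonal" % (2 * half_angle_deg(24.6, 30)))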

[6] estimates 40×22 degrees, 60Hz 4Mpx(2.5k)/8Mpx(4k)

References
[1] http://en.wikipedia.org/wiki/Optical_head-mounted_display#Microsoft
[2] http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220120293548%22.PGNR.&OS=DN/20120293548&RS=DN/20120293548
[3] Head Dimensions http://upload.wikimedia.org/wikipedia/commons/6/61/HeadAnthropometry.JPG
[4] http://www.theverge.com/2015/1/21/7868251/microsoft-hololens-hologram-hands-on-experience
[5] http://www.microsoft.com/microsoft-hololens/en-us
[6] http://www.reddit.com/r/DigitalConvergence/comments/2t7ywy/what_we_know_about_microsofts_hololens_announced/
[7] https://forums.oculus.com/viewtopic.php?f=26&t=19490#p238700
[8] http://www.reddit.com/r/oculus/comments/2t74sf/microsoft_announces_windows_holographic_ar/cnwsyny
[9] https://www.yahoo.com/tech/i-try-microsofts-crazy-hololens-108777779824.html
[10] http://en.wikipedia.org/wiki/Visual_acuity#Physiology
[11] http://en.wikipedia.org/wiki/Retina_Display#Models
[12] http://www.cnet.com/uk/news/microsofts-hololens-is-no-joke-my-reality-augmented-with-skype-minecraft/

MINERVA and the NCC Group’s Cyber10k

I have had the idea for a tool to automate, and optimise, threat modelling and related aspects of IT security for a while now. Over my many, many years in IT security I’ve constantly been astonished at how often developers couldn’t answer even quite simple questions about attack surfaces, and so the first several days of a gig would involve just trying to work out how a product works. In my certifications role, I was often tasked with explaining to some government how feature X worked, and why it was secure, and the often dire quality of documentation would regularly mean I’d have to go to the source code for answers. And I’ve lost count of the number of issues I’ve seen in programs and libraries written by SMEs and in the open-source world, that a simple and not-too-painful bit of targeted testing should have found.

A few months back, I resigned from my then employer, planning to take 6+ months off to work on my own projects, of which this is one. Around the same time I heard of the NCC Group’s Cyber10k. I decided to take a punt and enter my attack surface thingy idea, now called MINERVA, into the competition. I figured that if I won, then I’d have a few extra months to work on my projects before I had to get a real job. And irrespective, it would be good to get external validation of my ideas, and possibly also open up a pool of people who may be interested in alpha-testing.

Amazingly, I won!! And development is now proceeding at pace. The idea behind MINERVA has never been to make money from it (although that would be nice), but rather that I think there’s a real need for this tool. The status quo is shockingly poor, and my hope is that MINERVA will help the industry by automating something that is dull and slow to do, yet really useful. That said, what I’m trying to do is hard – I’ve estimated around a 75% chance of succeeding at all, and only a 40% chance that it will meet its stated aims. This was recognised independently by the Cyber10k judges, and I’m extremely happy that they decided to take a punt anyway.

Following the Telegraph and NCC Group articles, I thought it would be useful to provide some more details on what MINERVA is, what problems it’s trying to fix, and overall what the design goals are. These have been extracted from my submission to the Cyber10k, albeit with assorted tweaks. When I submitted to the competition this whole product was purely theoretical, and there have unsurprisingly been design changes since then – no plan survives contact with the enemy – which I have noted below.

MINERVA Introduction

MINERVA is a proposed system which would address multiple issues found in today’s resource-constrained IT development environment. This represents the entry for the Cyber10k challenge “Practical cyber security in start-ups and other resource constrained environments”. It also partially addresses some aspects of other challenges.

The system makes the documentation of attack surfaces, and by extension threat models, easier for non-experts. It does this through simplified top-down tools, but primarily through bottom-up tools which attempt to automatically construct attack surface models based on the code written. Using this combination of tools, plus others, the system can correlate between high level design and low level implementation, and highlight areas where these do not match. It can also automatically detect and track changes in the attack surface over time.

By making the system scalable, MINERVA will allow integration of attack surface models from large numbers of components, allowing high level views of the attack surfaces of large systems, up to and including operating systems and mobile devices. Allowing cloud integration enables support even amongst third party and open source components, together with dynamic updating of threats when new vulnerabilities are identified.

This greater knowledge of attack surface can then be used to prompt developers with questions about threat mitigations, and help them consider security issues in development even without the input of security specialists, thus raising the bar for all software development which uses MINERVA. It can further be used by security specialists to identify areas for research, centrally archive the results of threat modelling and architecture reviews, and generally make more efficient use of their limited time.

Problem Definition

Threat modelling has been proven to be an excellent tool in increasing the security of software. It is built into many methodologies, most notably the Microsoft Security Development Lifecycle. A well written threat model, with good coverage, attention made to mitigations, and then with testing of those mitigations, will likely help lead to a relatively secure product – certainly above industry standards.

Unfortunately threat modelling is difficult and generally requires security specialist involvement, and there is always a shortage of specialists with the correct skills. There have been numerous attempts to make the process simpler, to allow developer involvement and to educate developers, but these have had minimal impact in general – unfortunately developers are also regularly in short supply and overworked, and so are unwilling to sink time into a process with, to their mind, nebulous benefits.

Indeed, it’s not uncommon for developers to barely document their work at all, let alone create security documentation. As developers move away from traditional waterfall design and development to newer methodologies such as Agile, this problem is only getting worse; even where documents are written at some point, they rapidly fall out of date. Even under waterfall methodologies, where design documents exist, it is common for the implementation itself to be quite different, and it’s rare for developers to go back and update the design documents. Even when threat modelling is performed, the models are often stored in a variety of formats such as Word documents, as well as threat-modelling specific formats such as the Microsoft .tm4 files. These are rarely centrally stored and archived, and so it can be difficult to identify whether a threat model has been created, let alone how well it was written and whether it was actually used for anything.

Furthermore, products are becoming more complex over time. Threat models are often written for a specific feature or component, but these are rarely linked with others. Assumptions made in one component – such as that another component is performing certain checks on incoming data – are not always verified, leading to security vulnerabilities deep within a product. Even if the assumptions were correct initially, this does not mean they will still be correct several versions of software later.

Finally, open source and other third party components can lead to complications. These may be updated without the product developer being made aware, and this may be due to security issues. Developers rarely wish to spend time performing threat modelling and the like on code they do not own, and for non-open-source components it may not even be possible to do so due to a lack of product documentation.

Solution Objectives

MINERVA attempts to address the problems described above. Prior to the design itself, it is worthwhile to call out the high-level features the solution should have – what are the objectives of MINERVA.

Attack surface analysis
It must be possible for a user to create, view, and modify an attack surface model. This must include an interface which uses data flow diagrams (DFDs), and also a text-based interface. Other diagrams such as UML activity diagrams may be supported.
Centralised storage
Attack surface models must be stored in a centralised location. This should be able to be used to provide a holistic view of an entire product. Change tracking must be supported, together with warnings when changes in one model impact assumptions made in another. Attack surface models must be viewable at different levels of granularity, from product/OS down to process or finer grained.
The centralised storage must support authentication and authorization checks. There must be administration, and audit logging.
It should be possible to perform an impact analysis of security issues found in third party bugs.
Distributed storage
It must be possible for different instances of MINERVA to refer to or pull attack surface models from each other, subject to permissions. For example, the attack surface model for a product which uses OpenSSL should be able to just refer to the attack surface model for OpenSSL, stored on a public server, rather than having to re-implement its own version.
This distributed storage must support dependency tracking and versioning, such that the correct versions of attack surface models are used, and also such that a warning can be provided if a security vulnerability is flagged in an external dependency.
External references should support both imports as well as references, allowing use by non-internet-connected instances of MINERVA. Generally the relationships between servers should be pull, rather than push.
Automated input
It must be possible for automated tools to import and modify attack surface models, or parts thereof. These tools should include the scanning of source code, binaries, and before/after scans of systems when a product is installed or run.
The protocol and API for these must be publicly documented and available, to allow third parties to extend the functionality of MINERVA.
Manual Modification and Input
It must be possible for users to manually create, edit, and view attack surface models. Different interfaces may be desired for developers, security specialists, and third party contractors. Threat models must also be editable.
Automated Analysis
It must be possible for automated tools to analyse stored attack surface models. The protocol and APIs for this must be publicly documented.
There must be a tool which takes an attack surface model, and generates a threat model, which a user can then modify. A tool should be able to generate test plans.
Tools must exist which detect changes in the implementation or design, and which identify where the design and implementation of a product differ.
A tool could be provided which would allow the application of design templates, for example Common Criteria Protection Profiles, which would be used to prompt the creation of an attack surface model, and allow exportation of parts of a Common Criteria Security Target.
Workflow fits in with standard methodologies
Where possible, use of MINERVA should fit in with standard development methodologies. For top down waterfall methodologies, the diagrams created within MINERVA should be the same as those used in design documentation – it should be possible to trivially import and export between MINERVA and design documentation. For Agile, this should mean dynamic creation of models based on source code, change tracking, and generation of test plans and the like.
Due to the plethora of design methodologies, this objective will be considered met if it is feasible to write tools which provide the appropriate support; some sample tools may be written for a subset of common methodologies – one top down, and one iterative/Agile – as proofs of concept.

Solution Design

High level design

Architecture

The high level architecture for MINERVA is extremely simple, as shown in Figure 1. A database holds the attack surface models, threat models, and administrative details such as usernames and passwords. Access to the database is mediated by the MINERVA server itself. The server also performs validation of attack surfaces, authentication and authorization, and interpolation between attack surface levels – for example if a tool requests a high level simplified attack surface model, but the server only has a very low level detailed model, then the server will construct the high level model.

Figure 1: High Level Architecture

All tools (including the Inter-Service Interface) communicate with the server via the same SOAP over HTTPS interface (Note: currently using JSON over HTTPS). An exception may be made for administration, restricting access to only a single port – thus allowing firewalls to restrict access to only an administration host or network. Authentication will initially be against credentials held in the database; however, the aim is to allow HTTP(S) authentication, and thus Kerberos integration and the like.

The ISI will be used to pull data from remote instances of the MINERVA server. This will use the same protocol and authentication as other tools – it is essentially just another tool connecting to the external server.
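
To make the interface concrete, here is a hypothetical sketch of a tool pushing data to the server. The hostname, endpoint path, and payload fields are all invented for illustration; only the overall pattern (authenticated JSON over HTTPS) comes from the design above:

import requests

SERVER = "https://minerva.example.local"              # hypothetical server address
session = requests.Session()
session.auth = ("scanner-tool", "example-password")   # credentials held in the database initially

# A fragment of an attack surface model discovered by a scanning tool
payload = {
    "process": "ProcessX.exe",
    "listeners": [{"proto": "tcp", "port": 8001}],
    "libraries": [{"name": "OpenSSL", "version": "a.b.c"}],
}
resp = session.post(SERVER + "/api/attack-surface/components", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())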

Attack Surface Models

The main data stored within the database is the attack surface models. Figure 2 shows the structure of an attack surface model. The database schema will be based around this.

In the preliminary version of the attack surface model, a solution is made up of a set of networks, appliances (which are situated on networks), processes (within the appliances), and security domains. A network in this context is a logical group of components, which may or may not be on the same local network. An appliance is the hardware and operating system, although there will initially be an assumption that there is only one operating system on a set of hardware – i.e. virtualisation will initially not be supported. A process relates to an operating system process. In general, a security domain will align with a network, appliance, and/or process. Generally a security domain boundary can be present between processes, and/or between processes/components and assets.

Any time a dataflow crosses a security domain boundary, there is the opportunity to place a filter on either side of the boundary – for example for a network protocol dataflow, this could be a firewall, and for IPC it could be permissions.

A network is made up of Appliances, which contain processes. The network as a whole is deemed to have a set of users – these are abstract users used to differentiate between users with different permissions and capabilities, and in different security domains.

Figure 2: Attack Surface Model

Appliances contain operating systems – these are used to define the allowable set of permissions, capabilities, and the like that a user or process may be given.

Processes have attributes, and are made up of threads. Threads have attributes and are made up of components. Components interact with each other, and with assets and dataflows such as files, IPC, network connections, and the user interface. (Note: Currently I’m collapsing all threads in a process into a single thread – this is for simplicity’s sake.)

Generally high level attack surface models are made up of networks, appliances, and optionally processes. Low level models are made up of components, threads, and processes. Of course, at each level there may be abstractions such as grouping several processes or appliances together. This structure is aimed at providing a framework, rather than mandating a format. The underlying database schema will necessarily need to be rather complex to deal with the multitude of different formats of attack surface model which may be designed.

For each asset or interprocess communication method, source and destinations are defined – this may be a many:many relationship. When these are in different security domains, a primary threat vector may be generated for threat modelling. When these do not cross a domain a secondary threat vector may be generated – for example where defence in depth may be involved.
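
A rough sketch of how this hierarchy might be represented in code (my own illustration – the class and field names are assumptions, not MINERVA’s actual schema):

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Component:
    name: str                     # stored as metadata, not as the identifier
    subcomponents: List["Component"] = field(default_factory=list)

@dataclass
class Process:
    name: str
    security_domain: str
    attributes: Dict[str, str] = field(default_factory=dict)
    components: List[Component] = field(default_factory=list)  # threads collapsed into one

@dataclass
class Appliance:
    name: str
    operating_system: str         # defines the allowable permissions and capabilities
    processes: List[Process] = field(default_factory=list)

@dataclass
class Dataflow:
    source: str                   # component/asset identifiers
    destination: str
    crosses_domain: bool          # primary threat vector if True, secondary otherwise

@dataclass
class Network:
    name: str
    users: List[str] = field(default_factory=list)  # abstract users with differing permissions
    appliances: List[Appliance] = field(default_factory=list)
    dataflows: List[Dataflow] = field(default_factory=list)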

It is planned that development will begin at the Process and below level, higher levels will not be addressed until later in the development process. (Note: Development has proceeded with this plan. There is support for networks etc but I’ve focused on processes and below, as well as the capabilities an OS may have.)

Example Attack Surface Model

We will now explore an example attack surface model, designed from the top down, using MINERVA itself as the subject. Figure 3 shows the highest level architecture, which is made up of only two parts. Each of these would be stored as a separate process set, on an undefined appliance. By leaving the appliance undefined, the processes may be on the same, or different, appliances. Each process set is in a different security domain, meaning that the SOAP over HTTPS connection (Note: JSON over HTTPS is actually being used) crosses between security domains and hence represents a threat.

Figure 3: High level DFD

This high level data flow diagram (DFD) provides structure, but isn’t especially useful of itself. The MINERVA server can be further decomposed as in Figure 4. This takes the MINERVA server process, with a single thread, which contains the components described. It should be noted that the Database here is a component, rather than an asset – assets are specific, whereas components are generic.

Figure 4: MINERVA Server DFD

The dotted part of the MINERVA Server DFD can be further decomposed, and made more specific, as shown in Figure 5. When decomposition occurs, stubs will be auto-generated based on the higher level – for example, in Figure 5 the Verification and Read stubs are present. Normally the components defined in the higher level DFD would be sketched into the decomposed DFD, so that when the finer grained components are defined in the lower-level DFD, a relationship is assigned between the higher level component and the lower level subcomponents (which are just stored as components within the database, with a parent/child relationship).

Also of note in Figure 5 is that a package is defined (SQLite 3) which is a reference to a LIB or DLL – this would be stored as an attribute. A file asset is also defined, in a separate security domain.

Figure 5: Storage decomposition

Where network components are involved, an alternate type of decomposition may be useful – stack based decomposition. MINERVA knows the network stack involved for an expandable set of well-known protocols, such as SOAP over HTTPS in this case. The user may be prompted with the stack as shown in Figure 6 (Original Version), and can then break the stack into relevant components. For example, in Figure 6 (Component Decomposed) the operating system (defined by the Appliance) handles up to and including TCP. The process then uses OpenSSL (with a specified version) for parsing of SSL, and the Connection Handler subcomponent is used for HTTP and SOAP parsing. Of course, MINERVA may also make a guess about the stack – for example if it knows HTTPS is in use, and also notes that OpenSSL is an included DLL.

Figure 6: Connection Handler Decomposition

The Connection Handler in the decomposed version is a different Connection Handler to that in the Original Version. The system is aware of this because it has different connections – it communicates with OpenSSL rather than ‘Tools’. The name of a component is stored as metadata, rather than it being the identifier.

An alternate method for decomposing is shown in Figure 6 (Alternative Component Decomposed). This doesn’t use the stack decomposition method, but rather appears more as a protocol break. This display may be more appropriate for display when numerous components are shown rather than just the high level Connection Handler, however it will be less common for attack surface model creation by novices. This is an example of how different types of display may be used for different scenarios.

What can be done with this information

When constructing an attack surface model from the top down, the data collected can be used to verify the low level implementation.

Processes

  • What dynamic libraries should be loaded?
  • What files should be opened, and in what mode (r/w/x)?
  • What network connections should be opened/listened for?
  • What IPC methods should be defined, with what permissions?
  • What OS privileges/permissions should the process have?

Net

  • What firewall rules should apply?
  • Similarly, what Intrusion Detection System rules could apply?
  • Should the connection be encrypted? This can be tested for.
  • Should there be authentication? This may be tested for.
File

  • What files are opened, and how (exclusive access?, r/w/x)?
  • What permissions should any files have (vs the user the process is running as, and vs other users which may need to access the file)?

Lib and DLL/SO files

  • What versions are in use? These could be used for bug tracking.
  • Import attack surfaces and threat models for these products from other MINERVA servers.

For a bottom-up attack surface model, all the above may be collected and used to construct the attack surface model. For example, a scan may find that ProcessX.exe has:-

  • Network: Listening on tcp/8001
  • File: wibble.db (identified as a SQLite3 Database by tools such as file or TrID)
  • DLL: OpenSSL version a.b.c, importing functions to do with SSL
  • LIB: SQLite version d.e.f (learnt from the build environment)
  • Makeflags: ASLR (-fPIE), -fstack-protector, -FORTIFY_SOURCE, -Wformat, -Wformat-security
  • RunAs: UserX, who has standard user permissions
The import tool could take this information, and use this to prompt for the following (see the sketch after this list):-

    • Net
      • What protocol is on tcp/8001?
      • Where are connections from tcp/8001 expected from? What security domains?
    • Files
      • For wibble.db, confirm that it is a SQLite 3 file
      • What assumptions are made about access to the file, which users, apps, etc should have access?
      • What data is stored – is it sensitive?
      • Should it be encrypted? Should it hold data that is encrypted by the app?
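
As a toy illustration of that flow (not MINERVA code – the finding keys and prompt wording are my own), turning the ProcessX.exe findings above into prompts could look something like this:

scan_findings = {
    "network": [{"proto": "tcp", "port": 8001, "state": "listening"}],
    "files": [{"path": "wibble.db", "detected_type": "SQLite 3 database"}],
    "run_as": "UserX",
}

def prompts_for(findings):
    questions = []
    for listener in findings.get("network", []):
        endpoint = "%s/%d" % (listener["proto"], listener["port"])
        questions.append("What protocol is on %s?" % endpoint)
        questions.append("Which security domains are connections to %s expected from?" % endpoint)
    for f in findings.get("files", []):
        questions.append("Confirm that %s is a %s." % (f["path"], f["detected_type"]))
        questions.append("Which users and applications should have access to %s? Is the data sensitive, and should it be encrypted?" % f["path"])
    return questions

for q in prompts_for(scan_findings):
    print("-", q)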

This can all be used to define an attack surface model, with minimal overhead.

Once an attack surface model has been defined, this could also be used to perform “what if” analyses. For example, what if component X was compromised, and hence the security domain it is in changes?

Something that may also be attempted would be to take an attack surface model for a product for a given OS, and change the OS. Different operating systems have different privileges, capabilities, and permissions, and MINERVA could help prompt for and define those which should be used for new and different operating systems.

    Design Decisions

    The MINERVA server will be written in C#, due to familiarity with the language, cross platform support, and extensive tooling already existing for it. Tools will be written in whichever languages make sense. C# will be the default choice with native extensions where needed, however the Linux application will likely be Perl due to ease of programming.

The network protocol used will be HTTPS, as it is a standard and will support all necessary requirements. SOAP may be used over this, again for standards reasons. REST was considered; however, the authentication requirements and large payloads mean that SOAP will be the most suitable. This decision may be revisited when development is under way. (Note: Currently using JSON, as it’s vastly easier to code for).
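For flavour, the sort of JSON-over-HTTPS call a MINERVA tool might make to the server could look like the sketch below. The endpoint path and payload shape are placeholders, not the real API.

```csharp
// Hypothetical client call - the URL, path, and payload are illustrative only.
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

class MinervaClient
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("https://minerva.example.local/") };

        // Hypothetical payload: register a newly scanned component.
        var payload = new { Name = "ProcessX.exe", Listens = new[] { "tcp/8001" } };

        var response = await http.PostAsJsonAsync("api/components", payload);
        response.EnsureSuccessStatusCode();
        Console.WriteLine($"Server accepted component: {response.StatusCode}");
    }
}
```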

    Authentication will initially be against credentials held in the database, as this will be the easiest mechanism to implement and non-enterprise customers may prefer it. HTTP(S) authentication, against OS/AD credentials, is a stated aim for the future, to facilitate enterprise use.
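As a sketch of how database-held credentials might be verified in practice, assuming the database stores a per-user salt and a PBKDF2 hash (the iteration count and key length are illustrative choices, not part of the MINERVA design):

```csharp
// Sketch only: verify a password against a stored salt + PBKDF2-SHA256 hash.
using System.Security.Cryptography;

static class Credentials
{
    public static byte[] Hash(string password, byte[] salt, int iterations = 100_000)
    {
        using var kdf = new Rfc2898DeriveBytes(password, salt, iterations, HashAlgorithmName.SHA256);
        return kdf.GetBytes(32);
    }

    public static bool Verify(string password, byte[] salt, byte[] storedHash) =>
        CryptographicOperations.FixedTimeEquals(Hash(password, salt), storedHash);
}
```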

The database will be SQLite initially, due to ease of use. There are scalability concerns with SQLite, however, which may require support for an enterprise-grade database in the future. All database operations must therefore go through an abstraction layer in order to ensure that any future changes are as painless as possible. (Note: Doing code-first database development, it was easiest to use MS SQL.)
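The abstraction layer itself might be as simple as an interface along these lines (names hypothetical), with one implementation per supported database:

```csharp
// Sketch of a storage abstraction - type and method names are hypothetical.
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IAttackSurfaceStore
{
    Task<int> SaveComponentAsync(string name, string securityDomain);
    Task<IReadOnlyList<string>> ListComponentsAsync();
    Task LinkAsync(int fromComponentId, int toComponentId, string interfaceType);
}

// e.g. class SqliteAttackSurfaceStore : IAttackSurfaceStore { ... }
//      class SqlServerAttackSurfaceStore : IAttackSurfaceStore { ... }
```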

The main OS to be targeted for development will be Windows 7 and later, although where possible the design and implementation should be host OS agnostic. v1.0 should also include support for Linux clients/targets, and possibly also Android.

    Minimum Features

As seen in the high-level design, the majority of the functionality of the solution depends on external tools. The following tools and features are the minimum desired set which must be in place before the solution can be deemed to be version 1.0.

    Graphical UI for creation of attack surface models diagrammatically
People are used to drawing attack surfaces, and for simple systems this may still be the easiest way for knowledgeable users to create high-level threat models. The Microsoft TM tool has become the de-facto standard for this, and so a similar tool will be needed for MINERVA. This tool would allow attack surface models to be represented graphically, as data-flow diagrams, for viewing, creation, and modification of attack surfaces at all levels. (Note: Currently just using exports from the MS tool, but there are serious problems when deeper integration is desired. For example, having the ability to right-click on a graphical node, and then automatically scan the associated process/file).
    UI for creation of attack surface models textually
    For larger and more complex attack surfaces, creating attack surface models diagrammatically isn’t necessarily ideal. Furthermore, while a drawing canvas is good for people versed in the creation of these diagrams, for non-security-specialists a text-based input method may be best. This would allow users to list, for example, all the different interfaces, IPC, etc used, and then describe how these are implemented by different components. This would also allow a tool to prompt the user for more information, and make them think in a certain way. (Note: Currently implemented in a datagrid)
    Windows Process Scanner
One way to identify an attack surface is to scan a running system. This can work in two ways: by analysing a system before and after an application is installed and then comparing the two, or by monitoring a process's execution to detect files, IPC, network connections and the like that are created dynamically. Realistically both will need to be used.
The Microsoft Attack Surface Analyzer already performs the former task – the MINERVA tool will allow the import and parsing of its output. There are a number of different tools which provide the latter functionality; however, it is most likely that something custom will be written, albeit using commonly known and used techniques to gain the desired information. (Note: Currently using text output from Sysinternals Process Explorer etc – although the plan is to write a more tightly-coupled tool in the future)
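As a flavour of what even a crude scanner can collect without a custom driver, the sketch below lists the modules (DLLs) loaded by a named process using only System.Diagnostics; dynamic file, IPC, and network activity would need other techniques (ETW, parsing Sysinternals output, and so on).

```csharp
// Rough sketch: enumerate the DLLs loaded by a named process.
using System;
using System.Diagnostics;

class ModuleScan
{
    static void Main(string[] args)
    {
        var target = args.Length > 0 ? args[0] : "ProcessX";   // process name without .exe

        foreach (var proc in Process.GetProcessesByName(target))
        {
            Console.WriteLine($"PID {proc.Id}:");
            // Note: Modules can throw for protected/system processes.
            foreach (ProcessModule module in proc.Modules)
                Console.WriteLine($"  DLL: {module.ModuleName} ({module.FileName})");
        }
    }
}
```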
    Linux Process Scanner
    This would be the Linux equivalent to the Windows Process Scanner. For version 1.0 it will only include support for a couple of the more common distributions.
    Stupid Source Scanner
Some components, such as static libraries, cannot be scanned using the previous tools. Therefore a basic tool will need to be written to grep through source code and development environments to try to generate an attack surface. There already exist tools which spider source code, for example looking for security issues; however, few of these allow third-party plugins or extensions.
    While the preferred solution for this tool will be to extend an existing third party tool, a custom tool may need to be written. The tool would need to be able to handle the following for version 1.0: parsing of Makefiles, MS .VSProj, C/C++, C#, and Java. For version 1.0 the quality of the parsing/spidering will be very basic – essentially grepping for specific APIs, and identifying linked libraries.
Use may also be made of code annotations for the likes of Lint, and C# Code Contracts. It should be noted that the aim of the Stupid Source Scanner is not, certainly initially, to be anything like complete; rather, it is to get quick and dirty information out of the codebase with minimum involvement of developers.
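A sketch of the "grep for specific APIs" approach is below; the API list is a tiny illustrative sample rather than the scanner's real rule set.

```csharp
// Sketch: walk a source tree and flag lines mentioning attack-surface-relevant calls.
using System;
using System.IO;
using System.Text.RegularExpressions;

class StupidSourceScan
{
    // Small illustrative sample of interesting APIs, not the real rule set.
    static readonly Regex InterestingApis = new(
        @"\b(socket|bind|listen|accept|fopen|CreateFile[AW]?|RegOpenKeyEx[AW]?)\s*\(",
        RegexOptions.Compiled);

    static void Main(string[] args)
    {
        var root = args.Length > 0 ? args[0] : ".";

        foreach (var file in Directory.EnumerateFiles(root, "*.c*", SearchOption.AllDirectories))
        {
            var lineNo = 0;
            foreach (var line in File.ReadLines(file))
            {
                lineNo++;
                if (InterestingApis.IsMatch(line))
                    Console.WriteLine($"{file}:{lineNo}: {line.Trim()}");
            }
        }
    }
}
```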
    Microsoft Threat Model (.tm4) Import Tool
The Microsoft TM tool is the standard for creating attack surface diagrams and using them to create threat models. These are saved in .tm4 files, which are simple XML. As many security-aware enterprises may have already attempted to create threat models using this tool, for a subset of their components, it is vital to be able to import these into MINERVA. For version 1.0, attack surfaces must be imported; however, the threat models themselves do not need to be parsed – they can just be stored until support is added in a later version of MINERVA.
    Support for exporting as a .tm4 may also be added, depending on ease.
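Since .tm4 files are plain XML, a sensible first pass at the import tool is simply to load a file and dump the distinct element names so the mapping onto MINERVA's model can be worked out. The sketch below makes no assumptions about the actual .tm4 schema; it is just that exploratory step.

```csharp
// Exploratory sketch: list the distinct XML element names in a .tm4 file.
using System;
using System.Linq;
using System.Xml.Linq;

class Tm4Peek
{
    static void Main(string[] args)
    {
        // args[0] is the path to a .tm4 file, e.g. MyModel.tm4
        var doc = XDocument.Load(args[0]);

        var elementNames = doc.Descendants()
            .Select(e => e.Name.LocalName)
            .Distinct()
            .OrderBy(n => n);

        foreach (var name in elementNames)
            Console.WriteLine(name);
    }
}
```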
    Administration Tool
Any product whose aim is to hold security-sensitive information, and be used by large numbers of users, must support authentication and authorization checks. These in turn require administration. Likewise, an administrator must be able to decide which attack surface models, for which components, and to what level of detail, may be shared externally. The administration tool will provide a mechanism to perform these administrative functions, as well as access to logging, including security audit logs.
    Inter-Service Interface
A stated aim of MINERVA is to allow sharing of attack surface models between different instances of MINERVA. Rather than building support for this into the MINERVA server itself, a separate tool is desirable for security reasons as well as to simplify implementation. The ISI will essentially just be another tool, running with its own credentials, so even if the ISI were compromised, that would not lead to compromise of the server itself.
    Threat Model Generation
    An obvious use for an attack surface is to automate generation of threat models. This tool will perform this generation, and could potentially allow user interaction with the threat models themselves – although this could be implemented as a separate tool.
    Test Plan and Coverage Generation
    Once an attack surface has been designed, test plans and coverage analysis may be an alternate way to convey to developers the same information as a threat model would. This tool could list, for example, the different tests which should be performed to gain assurance that the implementation is secure – for example it may call out the network interfaces to fuzz, the files to try modifying, and the like. By conveying the information in a way that developers are more used to understanding, this may help increase coverage of security-relevant testing – for example many developers do very little ‘negative’ testing, and instead rely on ‘positive’ testing.
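One way this could be driven is a simple mapping from interface types in the attack surface model to suggested negative tests, as in the sketch below (the mapping shown is a small example, not the tool's actual rule set).

```csharp
// Sketch: map interface types to suggested negative tests.
using System;
using System.Collections.Generic;

static class TestPlan
{
    static readonly Dictionary<string, string[]> TestsByInterfaceType = new()
    {
        ["network"] = new[] { "Fuzz the protocol parser", "Send oversized/zero-length messages", "Connect from an unexpected security domain" },
        ["file"]    = new[] { "Modify the file while the process is running", "Replace it with a malformed file", "Check behaviour when permissions are wrong" },
        ["ipc"]     = new[] { "Call from an unprivileged process", "Pass malformed arguments" }
    };

    public static IEnumerable<string> For(string interfaceType, string name)
    {
        foreach (var test in TestsByInterfaceType.GetValueOrDefault(interfaceType, Array.Empty<string>()))
            yield return $"{name}: {test}";
    }
}

// e.g. TestPlan.For("network", "tcp/8001") lists the fuzzing and
// negative-connection tests for that listener.
```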
    Firewall Exception Generation
Through analysis of network connections/interfaces, a list of expected firewall rules, together with protocols, may be generated. This would be of use to customers of a developer who has used MINERVA, for example to know what firewall exceptions to put in place, together with what network traffic their Network-IDS should be detecting. It will also be useful for developers to detect and enumerate unexpected network connections, for example debug support which has accidentally been left in.
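A sketch of the generation step is below, emitting Windows netsh rules as an example; equivalent iptables or IDS rules could be produced the same way. The listener values reuse the earlier ProcessX.exe example.

```csharp
// Sketch: emit firewall exceptions for the listeners recorded in the model.
using System;
using System.Collections.Generic;

class FirewallRules
{
    record Listener(string Component, string Protocol, int Port);

    static void Main()
    {
        var listeners = new List<Listener> { new Listener("ProcessX.exe", "TCP", 8001) };

        foreach (var l in listeners)
            Console.WriteLine(
                $"netsh advfirewall firewall add rule name=\"{l.Component} {l.Protocol}/{l.Port}\" " +
                $"dir=in action=allow protocol={l.Protocol} localport={l.Port}");
    }
}
```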

    Development Order

    A proposed order for the development of the minimum features is below. For each feature, support will be added to the main server as needed. The first step will of course be to create the server itself, with backing database and network APIs. Following this, a rough administration tool will be written – this will allow testing of the network APIs. The UI tool for textual creation/representation of models will be next, due to ease of implementation, and to allow further testing of the server, and then threat model import in order to be able to quickly fill in examples. The Stupid Source Scanner, and either the Windows or Linux Process Scanner will be next, to test bottom-up attack surface construction. At this point the solution will in some ways be surpassing what is currently available in the public domain.

The Inter-Service Interface will next be stood up, to test being able to import/export between MINERVA instances. Writing a Graphical UI for attack surface diagrams will likely be non-trivial, and so up to this point the MS TM tool will have been used extensively. However, this is a necessary tool to have, and so it would be written next; it is the last tool for inputting attack surfaces. The generation and analysis tools will finally be written, as these are dependent on a number of previous features.

    So, in summary, the rough order for development will be as follows, although of course there will likely be overlap between all of these.

    1. MINERVA server
    2. Administration Tool
    3. UI for creation of attack surface models textually
    4. Microsoft Threat Model (.tm4) Import Tool
    5. Stupid Source Scanner (Note: I’m doing a v0.1 of the Windows Process Scanner first)
    6. Windows Process Scanner (Note: v0.1 takes the text output from existing tools such as Sysinternals Process Explorer)
    7. Inter-Service Interface
    8. Graphical UI for creation of attack surface models diagrammatically
    9. Linux Process Scanner
    10. Test Plan and Coverage Generation
    11. Firewall Exception Generation
    12. Threat Model Generation

    Stretch Goals

    While the above are the minimum features, there are several other tools and ideas that may prove desirable at some point.
    Binary Analysis
    Parsing of DLL/SO or LIB files to look for interfaces, for example looking at import tables to identify APIs in use.
    Common Criteria Security Target Generation
    Common Criteria relies on a Security Target document, which states how a product meets certain design requirements. This document is very onerous to create, but there are some aspects which could be automated based on attack surface diagrams and mitigations called out in threat models.
    Graphical UI for other diagram types
    The standard diagram type for attack surface diagrams is the dataflow diagram. However, other diagrams may be useful at times, for example UML Class, Package, and Activity diagrams may all be useful at certain levels of attack surface model.
    Taint Tracking
    If there could be some standardisation of attack surface model components, plus the expected contents of files, IPC, and network traffic, then some form of taint tracking may be possible. For example, if a file should be encrypted, and a high-level component is flagged as performing encryption, then the low level analysis could perform source code or dynamic analysis to identify whether encryption APIs are in fact being used. If there was no high-level component which was flagged as performing encryption, then that could be identified as an issue.
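A very rough sketch of that consistency check might look like this, with a hypothetical model shape: any file flagged as needing encryption should be accessed by at least one component flagged as performing encryption.

```csharp
// Sketch only: the record shapes are hypothetical, not MINERVA's data model.
using System.Collections.Generic;
using System.Linq;

record FileAsset(string Name, bool ShouldBeEncrypted, List<string> AccessedBy);
record Component(string Name, bool PerformsEncryption);

static class TaintCheck
{
    public static IEnumerable<string> Findings(List<FileAsset> files, List<Component> components) =>
        from f in files
        where f.ShouldBeEncrypted
           && !f.AccessedBy.Any(name => components.Any(c => c.Name == name && c.PerformsEncryption))
        select $"{f.Name} should be encrypted, but no component accessing it is flagged as performing encryption";
}

// e.g. Findings(files, components) would flag wibble.db if it is marked as
// sensitive but nothing that touches it claims to perform encryption.
```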

    Current Status

    Since submitting to Cyber10k, I have begun actual development of MINERVA – as noted before, winning Cyber10k would be a bonus but I was planning to give it a stab anyway. Of course, finding out I won has added a certain impetus.

    The Minerva server is currently operational, albeit with no concept of user controls (for administration), or data versioning. Nonetheless, attack surface models can be stored and queried, and the combination of separate threat models has been proven – for example when two ‘solutions’ read/write from the same file, or listen/connect to a network connection, then correlation can be performed and arbitrary attack surface models drawn which may include some of each solution.

    I used code-first database design techniques with Visual Studio, in C#, and so have used MS SQL, with JSON as a protocol just because it’s so damned easy. I have an administration client which also allows me to manually add/delete/modify attack surface models via a table-like UI. I can import MS .tm4 files, but this hasn’t been tested with the newest generation of the Microsoft Threat Modelling tool. Export to .TM4 isn’t yet supported either. I’m currently working on using the text/CSV output from existing tools such as dumpbin, and Process Explorer, as a temporary stopgap for proof-of-concept of the Windows Process Scanner. The next steps will be to export threat models, perform a few bits of analysis, and then have a custom-written Windows Process Scanner.

    Fingers crossed, I’m hoping to be alpha-testing in November, with a v1.0 by March 2015 (by which point I may need to get a real job again :( ). Still, things are looking remarkably good at the moment.

    Anyway, I hope this was of some interest to some people. Please feel free to hit me up if you’re interested in alpha- or beta-testing Minerva, or have any other queries – always happy to chat. In addition to the blog, feel free to email me at minerva at ianpeters.net.