# Snooper’s Charter via the back door

The Counter-Terrorism and Security Bill[1] is currently going through the Lords Committee stage[2] of parliamentary scrutiny. This stage allows interested parties to comment on and provide feedback about the bill, and provides for a line-by-line examination of its text. The general purpose is to tweak and amend the bill so that it is consistent, coherent, and actually meets its stated aims.

A number of amendments often result from this process. These are generally quite small, technical tweaks to clarify wording or include missing features. What they generally aren’t is massive changes which attempt to re-introduce other bills via the back door. An amendment proposed this week, though, does just that, attempting to sneak in the much-maligned Snooper’s Charter.

## Why should I care?

The powers being requested are, in my opinion, over-broad, with insufficient oversight and controls, confusingly drafted in places, and ultimately represent great potential danger to civil liberties. They’ll be expensive to implement, potentially harmful to your data security and privacy, and may not actually make you any safer.

And furthermore the powers are being sneaked in at the eleventh hour, circumventing a lot of parliamentary processes.

## Who is moving the amendment?

The following Lords have moved this amendment:

• Lord King of Bridgwater: Conservative member who served as Secretary of State for Defence, Northern Ireland, and others, under Thatcher. Chaired the Intelligence and Security Select Committee 1994-2001.
• Lord Blair of Boughton: Crossbench (i.e. of no specific party); was previously the Commissioner of the Met Police.
• Lord West of Spithead: Labour member; was Minister for Security and Counter-Terrorism.
• Lord Carlile of Berriew: Liberal Democrat; was the Independent Reviewer of Anti-Terrorism laws, succeeded by David Anderson QC. Was generally deemed ineffectual and pro-establishment when in this post, being in favour of control orders and 42-day detention periods.

These Lords are all ‘establishment’ members, whose backgrounds may imply they are more in favour of security controls than civil liberties. Personally I find it inconceivable that the government, and Theresa May MP, were not involved in the production of this amendment.

## What is the amendment?

Essentially it’s a reintroduction of the Snooper’s Charter, vastly expanding retention beyond that provided for in the Data Retention and Investigatory Powers Act. For the text, see paragraphs 79-99 of [8].

It allows the Secretary of State to require that telecommunications operators (e.g. ISPs and mobile phone operators) retain an assortment of communications data for up to 12 months, and provide that data to certain public authorities when requested. It also allows the Secretary of State to require that telecoms operators use specific techniques, equipment, and systems.

As ever, the devil is in the detail for all these powers and requirements – and there are some serious devils in there. Please see the section “Criticism and Comments” for more information on this.

## Why is it an amendment?

This is an excellent question, if I do say so myself. The Draft Communications Data Bill (aka Snooper’s Charter) was drafted by the government in 2012 but introduction to parliament was blocked by the Deputy PM Nick Clegg (Lib Dem).

Since then the government rushed through the Data Retention and Investigatory Powers Act 2014, ostensibly to fix data retention notices (from RIPA 2(1)) which had been ruled against by the ECJ. DRIP was very contentious for assorted reasons (see [3],[4]) but was successfully pushed through. A sunset clause of December 2016 was included, and it is expected that the whole subject of data retention and interception will be re-examined early next parliament.

So, the government couldn’t pass the Draft Communications Data Bill due to the Lib Dems blocking it, and couldn’t do too much in the Data Retention and Investigatory Powers Bill as that was emergency legislation and was controversial enough as it was. Theresa May has repeatedly asserted that she wants to pass the Communications Data Bill, and more recently David Cameron has signalled his renewed support in the light of the terrorist incidents in France (despite the fact that France already has something like the Communications Data Bill, which didn’t stop the attacks).

It seems to me therefore that this is an opportunistic attempt to reintroduce a long-standing policy of the Conservative party, taking advantage of the recent terrorist incidents around the world.

## Why now?

As mentioned, the recent events in France and elsewhere provide a veneer of justification and shielding, and allow defenders of the amendment to brand opponents as leaving the UK vulnerable to such attacks, despite the evidence that such assertions are wrong.

Interestingly, during the debates on DRIP, one issue was why the sunset clause was so far in the future, and indeed why DRIP was urgent (it was pushed through in just a few days). The government, and supporters, claimed that there was urgency due to the ECJ ruling, and that the sunset clause date was to allow sufficient consideration of an upcoming review by David Anderson QC (and others, see “Reviews of RIPA and DRIP” in [4]).

“I recognise that a number of Members have suggested that this sunset clause should be at an earlier stage. I say to them that the reason it has been put at the end of 2016 is that we will have a review by David Anderson which will report before the general election.” Theresa May [6]

“If Members think about the processes that we want to go through to ensure a full and proper consideration of the capabilities and powers that are needed to deal with the threat that we face and then about the right legislative framework within which those powers and capabilities would be operated, they will realise that that requires sufficient time for consideration and then for legislation to be put in place. That explains the need for the sunset clause at the end of 2016.” Theresa May [6]

“My feeling is that a great deal of work could be done during those 12 months and a set of recommendations could be made available to an incoming Government in May to June 2015.” Lord Hodgson of Astley Abbotts [5]

See also comments by Lord Taylor of Holbeach (Hansard HL Deb, 16 July 2014, c600 and c659)

The question therefore is why include the amendment now, before David Anderson’s review has been completed, and before there has been “sufficient time for consideration”.

To be fair, Lord Hodgson did state that “It is important to remember that the presence of a sunset clause, while it is absolute in its end date, does not mean that legislation could not be considered before that time if a Government decided that they were in a position to present it in Parliament.” [7] But I believe the point still stands – what is the urgency?

Furthermore the amendment has a sunset clause built in of December 2016 – the same as DRIP. So even if passed, this amendment will only survive for less than two years. The amendment allows the Secretary of State to require telecommunications providers to use specific equipment and systems, and provide remuneration, with an estimated cost of £1.8 billion (from the equivalent requirements in the Draft Communications Data Bill). There are also requirements to secure the data and systems sufficiently, and secondary legislation needs to be prepared before all this can happen. Surely therefore there is a significant risk that vast amounts of money and time will be invested into something which will expire, and may not be reintroduced, in less than two years’ time. Maybe the government believes that this money, once spent, would provide additional justification to reintroduce the bill in the future – this amendment playing the egg to the Communications Data Bill’s chicken?

### Process and timing

Before commenting on the substance of the amendment, I wanted to comment on the process of using an amendment in House of Lords Committee stage. In short, it’s despicable. The HL Committee stage is one of the last stages for the bill – it has already been through the majority of stages which could have considered and commented on this amendment: the House of Commons Second Reading, Committee, Report, and Third Reading stages, and the House of Lords Second Reading. The only remaining stages are the House of Lords Third Reading and the final Consideration of amendments.

Sneaking in such a large amendment, which would be large enough to be a separate Bill on its own, at such a late stage doesn’t allow parliament the proper time to consider and comment on the proposed powers. It doesn’t allow proper time for the public and interested parties to review the powers, and communicate with their MPs – in fact all the stages at which an MP would normally propose changes to an amendment have already been passed.

Waiting so long to propose such a large amendment with such an impact on civil liberties can be nothing but an attempt to game the system and sneak in an unpopular policy via the back door.

### Blanket retention

The amendment does not specifically require blanket retention; however, it does provide for the Secretary of State to issue notices which would result in blanket retention. Conceptually I’m torn on this subject – I can see the usefulness of having long-term records of communications data, which can be queried after the fact by authorised officials. However it’s also very dangerous having such a large amount of sensitive data collected, and there’s a real danger from the fishing expeditions that can be performed on such data.

Ultimately, the acceptability of such retention is reliant on how securely the data is stored, and the quality of the safeguards and oversight on access to the data by both the authorities and the telecoms operators themselves. Unfortunately this amendment is very weak regarding oversight and safeguards, and provides no limits on what the telecoms operator may themselves do with the data.

On the latter point, retention is normally governed by the Data Retention (EC Directive) Regulations 2009, implementing Directive 2006/24/EC of the EU Parliament, together with the Data Protection Act 1998 (DPA). I am assuming that the telecoms operators will not be allowed to use data retained under this amendment for their own purposes not related to the amendment. Doing so would be contrary to data protection principle #2 of the DPA: “Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes.”

It should be noted that communications data could be “sensitive personal data” as defined in the DPA. For example, information that a user is using Grindr would be classed as sensitive personal data under subsection (2)(f), “personal data consisting of information as to […] his sexual life”. As such, any processing done with that data must be in accordance with Schedule 3 of the DPA [15] – I think section 7 of that schedule allows this processing, but I’m not sure.

### Amendment – Terms

It will be useful to be familiar with certain terms – described below. References to the amendment will be to the PDF of the amendments [9]. Note that I’m not covering the parts relating to postal services.

• Communications Data: The set of all traffic data, use data, and subscriber data. Defined in pp14 section 1.
• Authorisation Data: Communications data which is data obtained in order to gain authorisation to obtain communications data. This is defined under “Filtering arrangements”, wherein communications data can be obtained and processed without an authorisation, in order to provide evidence for an authorisation to be sought. Defined on pp 11 subsection (1).
• Traffic Data: Data to do with the addressing, protocols, timestamps, and related information. See “Traffic Data” for some comments. Defined on pp17 subsections (2), (3).
• Use Data: Data about how, when, where, etc a user uses the telecommunications service. Explicitly doesn’t include contents of the communication. Defined on pp17 subsection (4).
• Subscriber Data: Information held by the telecoms service provider which isn’t Use Data or Traffic Data, about the user of the telecoms service. Defined on pp17 subsection (5).
• Part 3B Data: Seems to be another word for Communications Data, but maybe specifically just the communications data which is being obtained/requested by a public authority. Defined pp 6 section 1.
• Interception: Has the same meaning as in RIPA (sections 2 and 81), but see “Interception” below.
• Relevant public authority: The police (and similar), National Crime Agency, and intelligence services. Defined on pp12.
• Technical Oversight Board: Board established by section 13 of RIPA, which “shall consider the technical requirements and the financial consequences, for the person making the reference, of the notice referred to them and shall report their conclusions on those matters to that person and to the Secretary of State” RIPA 12(6)(b) [11]

### Traffic Data

The Traffic Data, defined on pp 17, may be extremely broad. I believe it may include data that would traditionally be considered content, with subsections (2)(a) and (2)(b)(v) being especially broad.

Subsection (3) is one of the most opaque sentences I’ve ever read – I still don’t know what it means or is trying to say: “Data identifying a computer file or computer program access to which is obtained, or which is run, by means of the communication is not “traffic data” except to the extent that the file or program is identified by reference to the apparatus in which it is stored.”

### Retention Period

By default data will need to be retained for 12 months ((Period for which data is to be retained) pp 3), but optionally may be shorter if the Secretary of State so desires. However, this can be extended indefinitely if a public authority informs the telecoms provider that the data is or may be required for the purpose of legal proceedings.

Given that all data may be required, this could result in public authorities requiring permanent storage of data. Furthermore, the clause doesn’t specify that only the subset of data which is needed should be retained. For example, if there are possible legal proceedings regarding subscriber X and an extension is needed, should only subscriber X’s data be retained beyond the 12 months, or all data?
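To make the ambiguity concrete, here is a minimal sketch in Python of the narrow reading, where an extension holds only the named subscribers’ data. The data model is entirely hypothetical – the amendment specifies nothing of the sort:

```python
from datetime import datetime, timedelta

# The 12-month default from (Period for which data is to be retained).
DEFAULT_RETENTION = timedelta(days=365)

def should_retain(record, now, extended_subscribers):
    """Narrow reading: a legal-proceedings extension holds only the
    named subscribers' data beyond the default window.

    `record` is a hypothetical dict with "subscriber" and "created"
    keys; nothing in the amendment defines any such data model.
    """
    if now - record["created"] < DEFAULT_RETENTION:
        return True  # still inside the default 12-month window
    return record["subscriber"] in extended_subscribers
```

Under the broad reading the final line would instead be `return bool(extended_subscribers)` – any open proceedings would suspend expiry for everyone’s data – and that is exactly the gap the clause leaves open.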

Subsection (4) does require that a public authority inform the telecoms provider as soon as reasonably practicable when the data is no longer needed, which may be a sufficient safeguard against indefinite storage of all or most data.

One question I have is why the data needs to be retained after it has been provided to the public authority. The only reason I can think of is if the defence in legal proceedings is entitled to access to the data direct from the telecoms provider – nothing in the amendment directly allows for this, although there is the standard “otherwise as authorised by law” ((Access to data) subsection (1)(b) on pp 4).

### Authorisation for Test Purposes

In addition to being able to get authorisation to obtain communications data for specific investigations and purposes, subsection (1)(b)(ii) of (Authorisations by police and other relevant public authorities) on pp 6 allows authorisation to be given for “the purposes of testing, maintaining or developing equipment, systems or other capabilities”.

While I can see the need for access to live data in order to test equipment, this should very much be the exception rather than the rule. This subsection is the only mention of such authorisation or use for test purposes, and there are no additional safeguards to ensure this is a rare event and that privacy and proportionality are considered. For example, while I can understand if my subscriber data is accessed in pursuance of an investigation into some criminal behaviour, I would be incensed if it is accessed without my knowledge to test some equipment, especially as such testing may take several weeks and lead to a protracted attack on my privacy.

### Interception

Subsection (4) of (Power to ensure or facilitate availability of data) on pp2 states that “Nothing in this Part authorises any conduct consisting in the interception of communications in the course of their transmission by means of a telecommunication system.” This is further restated in (Authorisations by police and other relevant public authorities) subsection (5)(a) on pp7. Interception is defined according to sections (2) and (81) of RIPA.

Interception normally would require a RIPA section 8(1) warrant. However, as stated in a witness statement [13] by Charles Farr of the Home Office, communications which terminate or originate outside the UK only need the very broad 8(4) warrant.

In the appeal between Coulson/Kuttner v Regina [12], the Lord Chief Justice ruled that listening to voicemails stored on a server still counts as interception, despite court rulings such as R v E [14], where the court said that “‘interception’ denotes some interference or abstraction of the signal, whether it is passing along wires or by wireless telegraphy, during the process of transmission” (para 20). Thus the courts seem to think that even temporary caching and storing on intermediary servers still counts as transmission, and hence accessing these would count as “interception”.

In that appeal, the Crown submitted that “The Crown does not maintain that the course of transmission necessarily includes all periods during which the transmission system stores the communication. However, it does submit that it does apply to those periods when the system is used for storage ‘in a manner that enables the intended recipient to collect it or otherwise have access to it’.” (para 11)

The question remains from the Crown contention – which “periods during which the transmission system stores the communication” do not count as the “course of transmission”, and hence access to which would not count as interception?

Furthermore, while subsection (4) of the amendment doesn’t authorise interception, neither does the amendment disallow interception. How, therefore, do the requirements for retention in subsection (3)(b) tally with a RIPA 8(4) warrant? Can a (3)(b) requirement in a retention notice be used to facilitate access to data under a RIPA 8(4) warrant?

### Filtering Arrangements

Several pages of the amendment deal with “Filtering arrangements” – see pages 9-13. Even after having read these sections several times I’m still not sure what exactly they mean. But if they mean what I think they mean – the ability to go fishing for data without any warrant or per-case authorisation being needed – then I’m not happy at all.

(Filtering arrangements for obtaining data) subsection (2) states that these “filtering arrangements” may “involve the obtaining of Part 3B data in pursuance of authorisation” – i.e. obtaining communications data in order to get authorisation to get communications data. The data will be obtained (subsection (2)(b)(i)), processed ((2)(b)(ii)), then disclosed to a designated senior officer ((2)(b)(iii)).

Now this may mean that a designated senior officer ((1)(a)) may be able to do a limited query to verify whether a request for authorisation is valid. For example, a police force requests authorisation to request details about subscriber X for IP address Y, so a designated senior officer does a quick check by querying the subscriber data for IP address Y, to verify that it does belong to subscriber X. This appears to be a use of the filtering arrangements on pp 9/10 (Use of filtering arrangements in pursuance of an authorisation). If this is the purpose for the section then I can see the usefulness of it, as long as it is secure and limited, and has good oversight.

It may, however, mean that a designated officer can grep for specific information – for example, all subscribers who are using Tor – and use this as justification to provide authorisation against those subscribers. If this is the purpose, then I’m very much not happy. This sort of fishing trip, when there’s no definitive evidence of a crime having happened or being planned, is a big no-no.
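The gap between the two readings can be sketched in a few lines of Python. The subscriber table and fields here are entirely invented for illustration – the amendment defines no schema or mechanism:

```python
# Entirely hypothetical subscriber table; the amendment defines no schema.
SUBSCRIBERS = [
    {"id": "X", "ip": "198.51.100.7", "uses_tor": False},
    {"id": "Z", "ip": "203.0.113.9", "uses_tor": True},
]

def verify_for_authorisation(ip, claimed_subscriber):
    """Narrow reading: one targeted check supporting a single,
    already-proposed authorisation request."""
    return any(s["ip"] == ip and s["id"] == claimed_subscriber
               for s in SUBSCRIBERS)

def fishing_query(predicate):
    """Broad reading: trawl the whole dataset and use the hits to
    justify authorisations after the fact."""
    return [s["id"] for s in SUBSCRIBERS if predicate(s)]
```

The first function answers a yes/no question about a request already on the table; the second generates suspects from the data itself. The drafting, as it stands, doesn’t clearly rule out the second.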

As drafted, I honestly don’t know what the purpose or mechanism for these “filtering arrangements” is. This whole set of clauses needs to be reworked to be more precise IMHO.

As an aside, some parts of these sections seem to imply that the Secretary of State themselves must do the querying etc.

### Requirements on Telecoms Service Providers

The Secretary of State can impose an assortment of requirements on telecoms operators when serving them with a retention notice. These are defined on pp 2 (Power to ensure or facilitate availability of data) subsection (3), as part of a notice under subsection (2)(b).

Also under (2)(b) the Secretary of State can impose ‘restrictions’. What ‘restrictions’ may be imposed is not defined.

The most critical of the requirements is that the Secretary of State can mandate that telecoms operators must “acquire, use or maintain specified equipment or systems” (subsection (3)(b)(ii)).

Essentially the government can order telecoms providers to put a black box on their network, which may provide the government a back door into their system. The telecom provider may not know what the box does, and may not be allowed to test it. The government can just say “trust us” and the telecoms operator must accept it. The government is also not liable for any losses if the black box goes wrong.

While the box cannot be used for “any conduct consisting in the interception of communications in the course of their transmission” (subsection (4)), the actual definition of “interception” is rather fluffy – as discussed in the “Interception” section above.

If I was a telecoms operator I would be extremely unhappy with this, and as a user of such services I’m not comfortable either.

### Confidentiality

It’s interesting to note that nowhere in the amendment is there a requirement for the telecoms provider to maintain the confidentiality of any request(s) for data by public authorities. So a telecoms provider could a) tell the subject of such a request that the police have asked for their data, b) provide summary information to the public about how many such requests there have been, and/or c) detail publicly what information they collect and retain and so what information relevant public authorities could query for.

It’s possible that such a requirement of confidentiality may be raised according to (Power to ensure or facilitate availability of data) (subsection (3)), but I’m not sure this is covered in that section. Or confidentiality may be deemed a restriction, according to subsection (2)(b) – the allowed scope of such restrictions isn’t defined anywhere.

Personally I’m a fan of transparency where possible – I think ISPs should report what data they’re retaining, and provide summary information on what is being requested (such as the number of users affected per year) – although this can and should also be reported by the IOCCO or similar – but I can also understand why they should not be allowed to tell their customers that they specifically are being targeted.

### Oversight

Speaking of the IOCCO, the subject of oversight is incompletely covered – specifically it is only covered where it relates to “Filtering Arrangements”.

The Secretary of State is required to give the Interception of Communications Commissioner certain information (pp 9, (Filtering arrangements for obtaining data) subsection (4)), provide an annual report (pp 11, (Duties in connection with operation of filtering arrangements) subsection (5)(b)) and report any significant contravention of the rules (subsection (7)). Whether the annual report will provide sufficient information for the IOCCO, I don’t know, but at least the subsection (4) requirements seem sufficient for the IOCCO.

There is not, however, any discussion of judicial oversight, appeals, or complaints other than by the telecoms provider, for retention orders or ‘Part 3B’ requests for the retained data. The IOCCO does not appear to have the power to investigate complaints nor impose penalties as the data retention from the amendment doesn’t derive from a RIPA warrant. It’s possible that other bodies may be able to investigate complaints by citizens, but this isn’t specifically called out – the situation is very complex as shown by the Surveillance Roadmap [10] (I especially recommend the table toward the back).

Telecoms providers can refer the retention notice to the “Technical Oversight Board” but they’re only providing oversight on the technical requirements and financial consequences (subsection (6)(b) of [11]), not the legality etc of the request. Furthermore, the Secretary of State can ignore the feedback from the Technical Oversight Board, and once ignored the subject cannot be referred again to the Technical Oversight Board.

There is also a requirement for the Secretary of State to consult OFCOM, the Technical Advisory Board, and the telecoms providers, before issuing a retention notice (pp 2 (Consultation Requirements)), but what a consultation means isn’t defined, nor is there any requirement for the Secretary of State to actually pay any attention to any feedback from such consultation, nor that such consultation should be public.

There are at least two stages where safeguards should apply, retention notices from the Secretary of State, and authorisation for and the obtaining of data by relevant public authorities of data that has been retained. Currently there is a requirement for the former to be “in writing” (pp 4 (Other Safeguards) subsection (1)(a)). For the latter, authorisation must be documented as described in pp 7 (Form of authorisation and authorised notices).

It should be noted though that the amendment doesn’t say who, if anyone, can review or comment upon any of this documentation.

So, in summary, the oversight in this amendment is not fit for purpose.

### Part 3B requests against People

Normally it would be expected that telecoms operators would be the recipients of both retention notices and requests for communications data (Part 3B data) which has been retained. However, (Authorisations by police and other relevant public authorities) subsections (3)(b) and (3)(c) allow for the latter to be served against individuals – “any person whom the authorised officer believes is, or may be, in possession of Part 3B data” or who “is capable of obtaining it”. So, rather than serving the notice against an ISP, which would have a legal team to investigate the legality of the request and may fight it in the courts if it desires, an authorised officer could serve it against one of the people who work as a system administrator at the ISP.

That seems dangerous to me – there are undoubtedly reasons why an individual rather than a company may need to be served, but this is ripe for misuse, especially if such a notice can include a confidentiality clause, such that the individual may be required ((Duties of telecommunications operators in relation to authorisations) subsection (2), pp 8) to provide such data without the knowledge or permission of their employer.

### Liability and Compensation

People acting in accordance with Part 3A (i.e. retention notices) are protected from any civil liability according to (Enforcement and protection for compliance) subsection (4), pp 5. There does not, however, seem to be any such protection for Part 3B (i.e. public authorities obtaining data). Furthermore, given that there is an obligation in Part 3A (Data security and integrity) on pp 3 to secure the data, I do wonder whether such protection from civil liability would exist if, for example, a user’s communications data was stolen due to security shortcomings in the operator’s system.

Furthermore, who would be liable to civil suit if data was stolen from equipment, or due to standards or practices, which the Secretary of State has mandated ((Power to ensure or facilitate availability of data) subsection (3)(b))?

This issue of liability needs further clarification.

(Operators’ costs of compliance with Parts 3A and 3B) states that the government must recompense operators for the costs incurred, or likely to be incurred, to do with this amendment. The amendment obviously doesn’t estimate how much this may cost HMG, but it should be noted that estimates for the Draft Communications Data Bill were £1.8 billion.

### Part 3C

There is no Part 3C. However, it’s mentioned on pages 2, 13, 14, and 18. I wonder what it was, and why it’s missing.

Obviously this is a well drafted amendment…

## Conclusions

This amendment is a shocking attempt to circumvent opportunities for comment and railroad an unpopular policy through parliament. This is just the latest in a series of such attempts by the government.

The amendment is badly drafted and is confusing. It solves a problem that doesn’t exist – retention is already required by DRIP. There is absolutely insufficient oversight and no judicial involvement, with no way for individuals or telecoms companies to complain.

# HoloLens – Some analysis

22/1/15 11:00 Updated with specs from [6], [7], [8], [9], a comment on resolution vs FOV, and an update on the HPU location from [12].

HoloLens blows me away with its possibilities. I love my Oculus Rift DK2, and Virtual Reality is perfect for when you want to just concentrate on the computer world, but I’ve always been keen to see a good Augmented Reality solution. HoloLens may be it – check it out at [5].

There had been rumours of MS working on something like this for a while – for example patent applications have been filed. [1][2] But no-one seemed to expect such a mature offering to be announced already, and with a possible early release in July 2015, with wider availability Autumn 2015 when Windows 10 is released. If the HoloLens, with the Windows 10 Holographic UI, deliver as announced then I’ll be buying.

Speaking of which, for all Microsoft’s talk of “Hologram” this and “Hologram” that, as far as I can see no holograms are being used. Instead, “Hologram” here is MS marketing speak for Augmented Reality. Their use of the word is inaccurate and misleading, but it’s also more accessible to normal consumers, so it’s entirely understandable.

With that out of the way, here’s a bit of analysis of the HoloLens and the Windows Holographic UI. Note that I haven’t seen or touched one of these in person, so take everything with a big pinch of salt….

## Outputs

There are two sets of output supported – a “Holographic” display, and Spatial Audio.

### Display

#### Display type

The most stand-out feature is the “Holographic” display. This appears to be using an optical HMD with some kind of waveguide combiner. That’s what those thick lenses are. This is also touched on in the MS patent filing [2].

#### Focal length

An important question is what the focal length is set to – and does it vary? To explain the importance of this, let’s do a quick experiment. Put your hand out in front of you. Look at it, and you’ll notice the background gets blurry. Look at the background behind your hand – now your hand gets blurry. That’s because the lenses of your eyes are changing shape to focus on what you’re looking at.

If the focal length on the display is fixed, then the display will be out of focus some of the time. Looking at write-ups, people appear to have used the display at ranges from 50cm up to several metres – and with no comments about blurry visuals. It appears therefore that the optics are somehow either changing the focal length of the display, or are “flattening” the world at large, so that your eyes don’t need to change focal length between short and long ranges.

#### Transmissivity

The waveguide is a way to shine light into your eyes, but if the world outside is too bright then you would have problems seeing the display. Therefore the front screen is tinted. A question is how much it is tinted – too little and you won’t be able to see the display in bright conditions, and too much and you won’t be able to see the outside world in darker conditions. It’s possible they’re photochromic and get darker when exposed to bright light.

#### Dimensions

I’ve attempted to estimate the dimensions of the display, but these should be taken with a massive pinch of salt. See the Maths section below for where I got the numbers from. My estimate is that the display, per eye, is around 5.6cm wide and 4cm high, and sits 1-2.1cm away from the user’s eyes. That equates to approximately 80-120 degrees vertical field of view, and 100-140 degrees horizontal. If accurate, that’s pretty impressive, and broadly on par with the Oculus Rift.

Since the initial presentation, other write-ups have implied my initial estimate was wildly optimistic. [6] asserts 40×22 degrees, whereas [9] provides two estimates of 23 degrees and 44 degrees diagonal. Descriptions state that the display appears to be quite small – much smaller than that of the Oculus Rift DK2.

#### Resolution

I don’t have any information on the resolution of the display. Microsoft have stated “HD”, however that can mean many things – for example, is that HD per eye, or HD split between the two eyes? It should be noted as well that HD is a pretty poor resolution for a display with a large field of view – put your face right next to your laptop or tablet screen and see how pixellated things suddenly look. There are some tricks that could be done if an eye tracker is being used (see the Eye Tracker section) to greatly improve apparent resolution.

The write-ups I’ve seen implied that resolution wasn’t bad at all, so this will be something to keep an eye on. [6] asserts somewhere between 4Mpx (2.5k) and 8Mpx (4k).

It should be noted that the human eye has around a 0.3-0.6 arc-minute pixel spacing, which equates to 100-200 pixels per degree.[10] The “Retina” display initially touted by Apple was around 53 pixels per degree. [11]
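To put those figures in context, here’s a quick back-of-the-envelope sketch of pixels per degree for an HD panel stretched across a wide field of view. The 120-degree FOV is an illustrative assumption, not a HoloLens spec:

```python
# Pixels per degree for an HD panel stretched across a wide field of view,
# compared with the human eye's ~100-200 px/deg resolving power.

def pixels_per_degree(horizontal_pixels, fov_degrees):
    return horizontal_pixels / fov_degrees

hd = pixels_per_degree(1920, 120)   # 1080p shared across an assumed 120-degree FOV
retina = 53                          # Apple's original "Retina" figure, per [11]
eye_low, eye_high = 100, 200         # human eye, per [10]

print(f"HD over 120 deg: {hd:.0f} px/deg")  # 16 px/deg - far below the eye
```

Even a 4K panel over that field of view only reaches ~32 px/deg, which is why resolution is worth keeping an eye on.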

### Spatial Audio

The audio aspect of gaming and computers in general has been quite poor for a while now. The standard is stereo, maybe with a subwoofer in a laptop or PC. Stereo can give you some hints about location, but full 5.1 surround sound has been a rarity for most PC users. There are some expensive headphones which support it, but these don’t work properly when you turn your head away from the screen – not ideal with a head-mounted display. It’s notable therefore that HoloLens supports spatial audio right out of the box.

With spatial audio, DSPs are used to simulate surround sound, and furthermore it takes into account the direction you’re facing. It’s amazing how useful this is for understanding your surroundings – a lesson that Oculus has also learnt with their latest prototypes of the Oculus Rift.

Reports from the HoloLens imply it’s using some kind of speaker(s) rather than headphones. Questions remain about how directional the sound is (i.e. can other people hear what you’re hearing), how loud the audio is, and how good the fidelity is.

## Inputs

The HoloLens appears to be festooned with sensors, which makes sense given that it is supposed to be a largely standalone device.

### Outward facing cameras

Either side of the headset are what look like two cameras. Alternatively, they may each be a camera and an LED transmitter, as used by the MS Kinect. Either way, these cameras provide two sets of information to the computer. Firstly, they detect the background and must provide a depth map of some kind – likely using similar techniques and APIs to the Kinect. Secondly, they detect hand movement and so are one of the sources of user input.

The background detection is used for ‘pinning’ augmented reality to the real world – when you turn your head you expect items in the virtual world to remain in a fixed location in the real world. That’s really hard to do, and vital to do well. The simplest way to do it is through the use of markers/glyphs – bits of paper with specific patterns that can be easily recognized by the computer. HoloLens is doing this marker-less, which is much harder. Techniques I’ve seen use algorithms such as PTAMM to build a ‘constellation’ of edges and corners, and then lock virtual objects to these.

Reports seem pretty positive about how this works, which is great news. A big question though is how it works in non-ideal lighting – how well does it track when it’s dark/dim or very bright, there’s moving shadows, etc. For example, what if you’re in a dim room with a bright TV running in the background, casting a constantly changing mix of light and dark around the room?

As mentioned, the cameras are also used for hand tracking. The cameras are apparently very wide angle, so they can watch hands across a wide range of movement, but many questions remain. These include how well the tracking works when hands cross over, become fists, and turn. Some finger tracking must be performed, judging by the click gesture used in many of the demos – are all fingers tracked? And how is this information made available to developers?

### Eye tracker

During some of the demos the demonstrators have said that the HoloLens can tell where you’re “looking” – indeed that is used extensively to interface with the UI. This may be based on just the orientation of the head, or some reports seem to imply that there’s actual eye tracking.

If there is eye tracking, then there’s likely cameras (possibly in that protuberance in the center) tracking where the user’s pupils are. That would be very cool if so, as it provides another valuable interface for user input, but it could also provide even more.

When tracking the pupil, if the optics can ‘move’ the image to different parts of the waveguide, the display could always provide higher resolution at the location you’re looking at, without having to waste the processing power of rendering high resolution over the whole display. Thus you could get an apparently high resolution over a broad field of view, from a display that only actually renders high resolution over a small field of view.
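As a rough illustration of the potential saving from gaze-contingent rendering, here’s a pixel-budget sketch. Every number (field of view, pixel densities, inset size) is an assumption for illustration only:

```python
# Pixel budget for gaze-contingent (foveated) rendering - illustrative only.
FOV_H, FOV_V = 100, 80            # assumed total field of view, degrees
FULL_PPD = 60                     # target pixels/degree in the gaze region
PERIPHERY_PPD = 15                # acceptable pixels/degree elsewhere
INSET = 20                        # high-res inset around the gaze, degrees square

# Rendering everything at full density vs a high-res inset plus low-res periphery.
full_frame = (FOV_H * FULL_PPD) * (FOV_V * FULL_PPD)
foveated = (INSET * FULL_PPD) ** 2 + (FOV_H * PERIPHERY_PPD) * (FOV_V * PERIPHERY_PPD)

print(f"uniform high-res: {full_frame / 1e6:.1f} Mpx")  # 28.8 Mpx
print(f"foveated        : {foveated / 1e6:.1f} Mpx")    # 3.2 Mpx
```

Under these assumptions the foveated frame needs roughly a ninth of the pixels, which is the kind of saving that makes a high apparent resolution plausible on mobile-class hardware.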

Also, by analysing how the pupils have converged, the computer can judge how far away you’re looking. For example – put your hand out in front of you and focus on one finger. Move the hand towards and away from your face, and you’ll feel your eyes converging as the finger gets closer – watch someone else’s eyes and you’ll see this clearly. If the computer can judge how far away you’re looking then it could change the focal length of the display itself, so that the display still appears in focus. It could also provide this information to the APIs – allowing a program to know, for example, which object the user is looking at when there’s a set of semi-transparent objects stacked behind each other.
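The geometry behind this is simple: each eye rotates inward to fixate a point on the midline, so the fixation distance follows from the convergence angle. A sketch, assuming a typical 6.3cm interpupillary distance (an assumption, not a HoloLens figure):

```python
import math

def gaze_distance_cm(convergence_deg, ipd_cm=6.3):
    """Estimate fixation distance from per-eye inward rotation.

    Each eye turns inward by convergence_deg to fixate a point on the
    midline at distance d, so tan(angle) = (ipd/2) / d.
    ipd_cm=6.3 is a typical adult interpupillary distance (assumption).
    """
    theta = math.radians(convergence_deg)
    return (ipd_cm / 2) / math.tan(theta)

# A finger at arm's length needs ~3 degrees of convergence per eye;
# at 10cm it needs ~17.5 degrees - an easily measurable difference.
print(round(gaze_distance_cm(3.0), 1))   # ~60.1 cm
print(round(gaze_distance_cm(17.5), 1))  # ~10.0 cm
```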

### Microphone

A microphone is built-in, which can be used both for VoIP such as Skype, and also as a source of user input using Cortana or similar. Questions include quality, and directionality – will the microphone pick up background noise?

### Positional sensors

The headset obviously detects when you move your head. This could be detected by the cameras, but the latency would likely be too large – Oculus have found that 20ms latency is a good target, and anything over 50ms is absolutely unacceptable. Therefore there are likely gyros and accelerometers to quickly detect movement. Gyros drift over time, and while accelerometers can detect movement they become inaccurate when trying to estimate the net movement after several moves. Therefore it’s likely the external cameras are periodically being used to recalibrate these sensors.
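A common way to combine the two is a complementary filter: integrate the gyro for low-latency updates, and gently pull the estimate towards the slower but drift-free camera-derived orientation. A minimal 1-D sketch of the idea – not how HoloLens actually does it:

```python
def fuse(gyro_rate_dps, dt, angle_deg, camera_angle_deg=None, k=0.5):
    """One step of a simple complementary filter (illustrative only).

    Integrate the gyro for low-latency updates; when a (slow, but
    drift-free) camera-derived angle arrives, nudge the estimate
    towards it with gain k.
    """
    angle_deg += gyro_rate_dps * dt          # fast path: dead reckoning
    if camera_angle_deg is not None:          # slow path: recalibration
        angle_deg += k * (camera_angle_deg - angle_deg)
    return angle_deg

# Gyro with a constant 1 deg/s bias (true orientation is 0 deg),
# corrected by a camera fix every 100 samples (10Hz fixes on a 1kHz gyro).
angle = 0.0
for step in range(1000):
    cam = 0.0 if step % 100 == 99 else None
    angle = fuse(1.0, 0.001, angle, cam)
print(round(angle, 2))  # ~0.1 deg - bounded, instead of drifting to ~1 deg
```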

Given that this headset is supposed to be standalone, it’s possible the headset also includes GPS and WiFi for geolocation as well.

### Bluetooth

I would be amazed if HoloLens doesn’t include Bluetooth support, which would then allow you to use other input devices, most notably a keyboard and mouse. Using a mouse may be more problematic – you need to map a two dimensional movement into a three dimensional world, however mice are vastly more precise for certain things.

## Processing unit

One surprise in the launch was that no connection to a PC/laptop was needed. Instead, the HoloLens is supposed to be standalone. That said, not all the computing is done in the headset alone. According to [4] there’s also a box you wear around your neck, which contains the processor. Exactly what is done where – in the headset or the box – hasn’t been described, but we can make some educated guesses. And all this is directly related to the new Holographic Processing Unit (HPU).

### HPU

Latency is king in VR/AR – head movement and other inputs need to be rapidly digested by the computer and updated on the display. If this takes longer than 50ms, you’re going to feel ill. Using a general-purpose CPU and graphics processing unit (GPU) this is achievable but not easy. If your CPU is also busy trying to understand the world – tracking hand movements, backgrounds, cameras, etc – then it gets harder still.

Therefore the HPU seems to be being used to offload some of this processing – the HPU can combine all the different data inputs and provide them to applications and the CPU as a simple, low bandwidth, data stream. For example, rather than the CPU having to parse a frame from a camera, detect where hands are, then identify finger locations, orientation, etc, the HPU can do all this and supply the CPU with a basic set of co-ordinates for each of the joints in the hands.
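Some rough numbers show why this matters. Assuming Kinect-class depth frames and a common 21-joint hand skeleton (both assumptions – Microsoft haven’t published specs), the parsed joint stream is around three orders of magnitude smaller than the raw camera data:

```python
# Rough comparison: raw depth frames vs a parsed hand-joint stream.
# All figures are illustrative assumptions, not HoloLens specifications.

FRAME_W, FRAME_H = 640, 480       # per camera, Kinect-class depth frame
BYTES_PER_PIXEL = 2               # 16-bit depth values
FPS = 30
JOINTS_PER_HAND = 21              # a common hand-skeleton convention
BYTES_PER_JOINT = 12              # three 32-bit floats (x, y, z)

raw_bps = FRAME_W * FRAME_H * BYTES_PER_PIXEL * FPS       # per camera
joint_bps = 2 * JOINTS_PER_HAND * BYTES_PER_JOINT * FPS   # both hands

print(f"raw frames : {raw_bps / 1e6:.1f} MB/s per camera")    # 18.4 MB/s
print(f"joint data : {joint_bps / 1e3:.1f} kB/s, two hands")  # 15.1 kB/s
```

That asymmetry is the argument for doing the vision processing close to the cameras and shipping only the distilled results onwards.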

Using a specialist ASIC (chip) allows this to be done fast, and in a power-efficient manner. The HPU does a small number of things, but does them very very well.

I mentioned bandwidth a moment ago, and this provides a hint of where the HPU is. Multiple (possibly 4-6) cameras at sufficiently high frame rates result in vast amounts of data being used every second. This could be streamed wirelessly to the control box, but that would require very high frequency wireless which would be wasteful for power. If, however, the HPU is in the headset then it could instead stream the post-processed low-bandwidth data to/from the control box instead.
Where to put the GPU is a harder question – a lot of data needs to be sent to the graphics memory for processing, so it’s likely that the GPU is in the control box, which then wirelessly streams the video output to the headset.

Since my writeup, [12] has come out which states that in the demo/dev unit they used the HPU was actually worn around the neck, with the headset tethered to a PC. It’s unknown what this means for the final, release, version, but it sounds like there’s a lot of miniaturisation and optimisation needed at the moment.

### Other computers

While the HoloLens has been designed to be standalone (albeit with the control/processor box around your neck), a big question is whether it will support other control/processor boxes – for example will it be possible to interface HoloLens with a laptop or PC. This would allow power users willing to forego some flexibility of movement (depending on wireless ranges) to use the higher processor/GPU power in their non-portable boxes. This may require some kind of dongle to handle the wireless communication – assuming some non-standard wireless protocols are being used, possibly at a non-standard frequency – e.g. the 24GHz ISM band instead of the 2.4GHz used for WiFi and Bluetooth, or the 5.8GHz used for newer WiFi. My hope is that this will be supported.

## Software

### Windows 10 Holographic UI

Windows 10 will support HoloLens natively – apparently all UIs will support it. This could actually be a lot simpler to implement than you’d imagine. Currently, each window on Windows has a location (X, Y) and a size (width, height). In a 3D display, the location gains a Z co-ordinate (X, Y, Z), plus a rotation around each axis (rX, rY, rZ). That provides sufficient information to display windows in a 3D world. Optionally you could also add warps to allow windows to be curved – that’s just a couple of other variables.

Importantly, all of this can be hidden from applications unless they want the information. An application just paints into a window, which Windows warps/transforms into the world. An application detects user input by mouse clicks in a 2D world, which Windows can provide by finding the intersection between the line of your gaze and the plane of the window.
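That gaze-to-window mapping is just a ray/plane intersection. A minimal sketch – with invented window geometry – of how a 2D cursor position could be derived from a 3D gaze ray:

```python
# Map a gaze ray onto a 2D point in a window's plane - a sketch of how a
# 3D window system could feed ordinary 2D cursor events to applications.
# Plain tuple vectors; the names and geometry are illustrative assumptions.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add_scaled(a, b, t):
    return tuple(x + t * y for x, y in zip(a, b))

def gaze_to_window(origin, direction, win_origin, win_u, win_v, normal):
    """Intersect the gaze ray with the window plane, return (u, v) coords.

    win_origin is the window's top-left corner; win_u and win_v are unit
    vectors along its width and height; normal is the plane normal.
    Returns None if the gaze is parallel to, or pointing away from, the plane.
    """
    denom = dot(direction, normal)
    if abs(denom) < 1e-9:
        return None
    t = dot(sub(win_origin, origin), normal) / denom
    if t < 0:
        return None
    hit = add_scaled(origin, direction, t)    # 3D intersection point
    rel = sub(hit, win_origin)
    return dot(rel, win_u), dot(rel, win_v)   # 2D offset within the window

# A window 2m ahead, facing the user; gaze slightly right and down.
uv = gaze_to_window(origin=(0, 0, 0), direction=(0.1, -0.1, 1.0),
                    win_origin=(-0.5, 0.5, 2.0), win_u=(1, 0, 0),
                    win_v=(0, -1, 0), normal=(0, 0, -1))
print(uv)  # ~0.7m right of and ~0.7m below the window's top-left corner
```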

So most applications should just work.

Furthermore, as the HoloLens will be using Windows 10, it seems more likely that other platforms (e.g. laptops) also running Windows 10 will be able to interface with the headset.

### APIs

That said, many developers will be excited to operate in a 3D world, and that’s where the APIs come in. The Kinect libraries were a bit of a pain to work with, so hopefully MS have learnt some lessons there. The key thing will be to provide a couple of different layers of abstraction for developers, to allow devs the flexibility to do what they want, but have MS libraries do the heavy lifting when possible. MS hasn’t a great history of this – with many APIs not providing easy access to lower level abstractions, so this will be something to watch.

It will also be interesting to see how the APIs and Holographic UI interoperate with other head mounted displays such as the Oculus Rift. Hopefully some standards can be defined to allow people to pick and choose their headset – there are some use cases that VR is better for than AR, and vice versa.

## Questions

As ever with an announcement like this, there are many questions. However it’s impressive that Microsoft felt the product mature enough to provide journalists with interactive (albeit tightly scripted) demonstrations. Some of the questions, and things to look out for, include:-
– What is the actual resolution, Field of View, and refresh rate?
– Is there really eye tracking?
– How well does the AR tracking work, especially in non-ideal lighting?
– What is the battery life like?
– How well does the Holographic Interface actually work?
– What is the API, and how easy is it to code against?
– What is the performance like, playing videos and games for example – given that games are very reliant on powerful GPUs?
– Can the headset be used with other Windows 10 platforms?
– Can other headsets be used with the Windows 10 Holographic UI?
– Patent arsiness: MS has filed several recent patents in this space, are they going to use these against other players in this space, or are they primarily for defensive use?

## Some Maths

You may wonder how I came up with the estimate of Field of View. For source material I used several photos, some information on Head Geometry, and a bit of trigonometry.

Figure 2: Front view – note estimated size in pixels

Figure 3: Worn view – note alignment with eyes

Figure 4: Side view – note distance of lenses vs nose

Firstly, by looking at the photos in figures 2, 3, and 4 I estimated the following:-
– The display (per eyes) was around 110×80 pixels
– The display runs horizontally from roughly level with the outside of the eye, and is symmetrical around the pupil when looking dead ahead
– The display sits somewhere between the halfway point from the depression of the nose between the eyes (sellion) to the tip of the nose, and the tip itself.

From this, we can get the following information, using the 50th percentile for women:-
– Eye width: 5.6cm (#5-#2 in [3], assuming symmetry of the eye around the pupil)
– Screen distance: 1cm to 2.1cm (#12-#11 in [3])

Figure 5: Trigonometry

Given the 110×80 pixel ratio, that gives a height of around 4cm. Using the simple trig formula from figure 5, where tan X = (A/2)/B we can punch in some numbers.

Horizontally: A = 5.6cm, B=1 to 2.1cm, therefore C=70.3 to 53 degrees
Vertically: A=4cm, B=1 to 2.1cm, therefore C=63.4 to 43.6 degrees

Note that the field of view is 2 x C.
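The same calculation in code, reproducing the C values above:

```python
import math

def half_angle_deg(width_cm, distance_cm):
    """C where tan C = (A/2)/B; the full field of view is 2*C."""
    return math.degrees(math.atan((width_cm / 2) / distance_cm))

# A = display dimension, B = eye-to-screen distance of 1 to 2.1cm.
for label, a in (("horizontal", 5.6), ("vertical", 4.0)):
    near, far = half_angle_deg(a, 1.0), half_angle_deg(a, 2.1)
    print(f"{label}: C = {far:.1f} to {near:.1f} deg "
          f"-> FOV = {2 * far:.0f} to {2 * near:.0f} deg")
```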

[9] provides a different estimate of the size of the display: “A frame appears in front of me, about the size of a 50-inch HDTV at 10 feet, or perhaps an iPad at half arm’s length.” This results in the following estimates:-
– 50-inch (127cm) (A) at 10 feet (305cm) (B) => C = 11.8 degrees diagonal
– iPad (9.7 inch, 24.6cm) (A) at half arm’s length (60cm/2) (B) => C = 22.3 degrees diagonal

[6] estimates 40×22 degrees, 60Hz 4Mpx(2.5k)/8Mpx(4k)

# MINERVA and the NCC Group’s Cyber10k

I have had the idea for a tool to automate, and optimise, threat modelling and related aspects of IT security for a while now. Over my many, many, years in IT security I’ve constantly been astonished how developers often couldn’t answer even quite simple questions about attack surfaces, and so the first several days of a gig would involve just trying to work out how a product works. In my certifications role, I was often tasked with explaining to some government how feature X worked, and why it was secure, and the often dire quality of documentation would regularly mean I’d have to go to the source code for answers. And I’ve lost count of the number of issues I’ve seen in programs and libraries written by SMEs and in the open-source world, that a simple and not-too-painful bit of targeted testing should have found.

A few months back, I resigned from my then employer, planning to take 6+ months off to work on my own projects, of which this is one. Around the same time I heard of the NCC Group’s Cyber10k. I decided to take a punt and enter my attack surface thingy idea, now called MINERVA, into the competition. I figured that if I won, then I’d have a few extra months to work on my projects before I had to get a real job. And irrespective, it would be good to get external validation of my ideas, and possibly also open up a pool of people who may be interested in alpha-testing.

Amazingly, I won!! And development is now proceeding at pace. The idea behind MINERVA has never been to make money from it (although that would be nice), but rather that I think there’s a real need for this tool. The status quo is shockingly poor, and my hope is that MINERVA will help the industry by automating something that is dull and slow to do, yet really useful. That said, what I’m trying to do is hard – I’ve estimated around a 75% chance of succeeding at all, and only 40% that it would meet its stated aims. This was recognised independently by the Cyber10k judges, and I’m extremely happy that they decided to take a punt anyway.

Following the Telegraph and NCC Group articles, I thought it would be useful to provide some more details on what MINERVA is, what problems it’s trying to fix, and overall what the design goals are. These have been extracted from my submission to the Cyber10k, albeit with assorted tweaks. When I submitted to the competition this whole product was purely theoretical, and there have unsurprisingly been design changes since then – no plan survives contact with the enemy – which I have noted below.

## MINERVA Introduction

MINERVA is a proposed system which would address multiple issues found in today’s resource-constrained IT development environment. This represents the entry for the Cyber10k challenge “Practical cyber security in start-ups and other resource constrained environments”. It also partially addresses some aspects of other challenges.

The system makes the documentation of attack surfaces, and by extension threat models, easier for non-experts. It does this through simplified top-down tools, but primarily through bottom-up tools which attempt to automatically construct attack surface models from the code written. Using this combination of tools, plus others, the system can correlate between high level design and low level implementation, and highlight areas where the two do not match. It can also automatically detect and track changes in the attack surface over time.

By making the system scalable, MINERVA will allow integration of attack surface models from large numbers of components, allowing high level views of the attack surfaces of large systems up to and including operating systems and mobile devices. Allowing cloud integration enables support even amongst third party and open source components, together with dynamic updating of threats when new vulnerabilities are identified.

This greater knowledge of attack surface can then be used to prompt developers with questions about threat mitigations, and help them consider security issues in development even without the input of security specialists, thus raising the bar for all software development which uses MINERVA. It can further be used by security specialists to identify areas for research, centrally archive the results of threat modelling and architecture reviews, and generally make more efficient use of their limited time.

## Problem Definition

Threat modelling has been proven to be an excellent tool in increasing the security of software. It is built into many methodologies, most notably the Microsoft Security Development Lifecycle. A well written threat model, with good coverage, attention made to mitigations, and then with testing of those mitigations, will likely help lead to a relatively secure product – certainly above industry standards.

Unfortunately threat modelling is difficult and generally requires security specialist involvement, and there is always a shortage of specialists with the correct skills. There have been numerous attempts to make the process simpler to allow developer involvement, and to educate developers, but these have had minimal impact in general – unfortunately developers are also regularly in short supply and overworked, and so are unwilling to sink time into a process with, to their mind, nebulous benefits.

Indeed, it’s not uncommon for developers to barely document their work at all, let alone create security documentation. As developers move away from traditional waterfall design and development to newer methodologies such as Agile, this problem is only getting worse, and even when documents are written at some point, they rapidly fall out of date. Even under waterfall methodologies, where design documents exist, it is common for the implementation itself to be quite different, and it’s rare for developers to go back and update the design documents. Even when threat modelling is performed, the results are often stored in a variety of formats such as Word documents, as well as threat-modelling-specific formats such as the Microsoft .tm4 files. These are rarely centrally stored and archived, and so it can be difficult to identify whether a threat model has been created, let alone how well it was written, and whether it was actually used for anything.

Furthermore, products are becoming more complex over time. Threat models are often written for a specific feature or component, but these are rarely linked with others. Assumptions made in one component, such as that another component is performing certain checks on incoming data, are not always verified leading to security vulnerabilities deep within a product. Even if the assumptions were correct initially, this does not mean the assumption will be correct several versions of software later.

Finally, open source and other third party components can lead to complications. These may be updated without the product developer being made aware, and this may be due to security issues. Developers rarely wish to spend time performing threat modelling and the like on code they do not own, and for non-open-source components it may not even be possible to do so due to a lack of product documentation.

## Solution Objectives

MINERVA attempts to address the problems described above. Prior to the design itself, it is worthwhile to call out the high-level features the solution should have – what are the objectives of MINERVA.

**Attack surface analysis**
It must be possible for a user to create, view, and modify an attack surface model. This must include an interface which uses data flow diagrams (DFDs), and also a text-based interface. Other diagrams such as UML activity diagrams may be supported.
**Centralised storage**
Attack surface models must be stored in a centralised location, which should be able to provide a holistic view of an entire product. Change tracking must be supported, together with warnings when changes in one model impact assumptions made in another. Attack surface models must be viewable at different levels of granularity, from product/OS down to process or finer grained.
The centralised storage must support authentication and authorization checks. There must be administration and audit logging.
It should be possible to perform an impact analysis of security issues found in third-party components.
**Distributed storage**
It must be possible for different instances of MINERVA to refer to or pull attack surface models from each other, subject to permissions. For example, the attack surface model for a product which uses OpenSSL should be able to simply refer to the attack surface model for OpenSSL, stored on a public server, rather than having to re-implement its own version.
This distributed storage must support dependency tracking and versioning, such that the correct versions of attack surface models are used, and also such that a warning can be provided if a security vulnerability is flagged in an external dependency.
External references should support both imports and references, allowing use by non-internet-connected instances of MINERVA. Generally the relationships between servers should be pull, rather than push.
**Automated input**
It must be possible for automated tools to import and modify attack surface models, or parts thereof. These tools should include the scanning of source code, binaries, and before/after scans of systems when a product is installed or run.
The protocol and API for these must be publicly documented and available, to allow third parties to extend the functionality of MINERVA.
**Manual modification and input**
It must be possible for users to manually create, edit, and view attack surface models. Different interfaces may be desired for developers, security specialists, and third party contractors. Threat models must also be editable.
**Automated analysis**
It must be possible for automated tools to analyse stored attack surface models. The protocol and APIs for this must be publicly documented.
There must be a tool which takes an attack surface model and generates a threat model, which a user can then modify. A tool should be able to generate test plans.
Tools must exist which detect changes in the implementation or design, and which identify where the design and implementation of a product differ.
A tool could be provided which allows the application of design templates, for example Common Criteria Protection Profiles, which would be used to prompt the creation of an attack surface model and allow exportation of parts of a Common Criteria Security Target.
**Workflow fits in with standard methodologies**
Where possible, use of MINERVA should fit in with standard development methodologies. For top-down waterfall methodologies, the diagrams created within MINERVA should be the same as those used in design documentation – it should be possible to trivially import and export between MINERVA and design documentation. For Agile, this means dynamic creation of models based on source code, change tracking, and generation of test plans and the like.
Due to the plethora of design methodologies, this objective is met if it is feasible to write tools which provide the appropriate support; some sample tools may be written for a subset of common methodologies – one top-down, and one iterative/Agile – as proofs of concept.
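To make the distributed storage objective concrete, here is a purely hypothetical sketch of a model that references an externally hosted OpenSSL model. Every field name and URL here is invented for illustration – the submission does not define a storage format:

```python
# Hypothetical sketch: a product's attack surface model referencing an
# externally hosted OpenSSL model instead of redefining it. All field
# names and the URL are invented; the objectives above only require that
# models can refer to, or import, models held on other MINERVA instances.
model = {
    "name": "ExampleProduct",
    "version": "1.2.0",
    "processes": [{
        "name": "example_server",
        "components": [
            {"name": "ConnectionHandler"},
            {"external_ref": {
                "server": "https://public.minerva.example/models",
                "model": "OpenSSL",
                "version": "1.0.1j",   # pinned, for dependency tracking
                "mode": "reference",    # or "import" for offline instances
            }},
        ],
    }],
}

# A vulnerability flagged against OpenSSL 1.0.1j could then be traced to
# every model that pins that version.
print(model["processes"][0]["components"][1]["external_ref"]["model"])
```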

## Solution Design

### High level design

#### Architecture

The high level architecture for MINERVA is extremely simple, as shown in Figure 1. A database holds the attack surface models, threat models, and administrative details such as usernames and passwords. Access to the database is mediated by the MINERVA server itself. The server also performs validation of attack surfaces, authentication and authorization, and interpolation between attack surface levels – for example if a tool requests a high level simplified attack surface model, but the server only has a very low level detailed model, then the server will construct the high level model.

Figure 1: High Level Architecture

All tools (including the Inter-Service Interface) communicate with the server via the same SOAP over HTTPS interface (Note: currently using JSON over HTTPS). An exception may be made for administration, restricting access to only a single port – thus allowing firewalls to restrict access to only an administration host or network. Authentication will initially be against credentials held in the database, however the aim is to allow HTTP(S) authentication, and thus Kerberos integration and the like.

The ISI will be used to pull data from remote instances of the MINERVA server. This will use the same protocol and authentication as other tools – it is essentially just another tool connecting to the external server.

#### Attack Surface Models

The main data stored within the database is the attack surface models. Figure 2 shows the structure of an attack surface model. The database schema will be based around this.

Under the preliminary attack surface model, a solution is made up of a set of networks, appliances (which are situated on networks), processes (within the appliances), and security domains. A network in this context is a logical group of components, which may or may not be on the same local network. An appliance is the hardware and operating system, although there will initially be an assumption that there is only one operating system on a set of hardware – i.e. virtualisation will initially not be supported. A process relates to an operating system process. In general, a security domain will align with a network, appliance, and/or process. Generally a security domain boundary can be present between processes, and/or between processes/components and assets.

Any time a dataflow crosses a security domain boundary, there is the opportunity to place a filter on either side of the boundary – for example for a network protocol dataflow, this could be a firewall, and for IPC it could be permissions.

A network is made up of Appliances, which contain processes. The network as a whole is deemed to have a set of users – these are abstract users used to differentiate between users with different permissions and capabilities, and in different security domains.

Figure 2: Attack Surface Model

Appliances contain operating systems – these are used to define the allowable set of permissions, capabilities, and the like that a user or process may be given.

Processes have attributes, and are made up of threads. Threads have attributes and are made up of components. Components interact with each other, and with assets and dataflows such as files, IPC, network connections, and user interfaces. (Note: Currently I’m collapsing all threads in a process into a single thread – this is for simplicity’s sake).

Generally high level attack surface models are made up of networks, appliances, and optionally processes. Low level models are made up of components, threads, and processes. Of course, at each level there may be abstractions such as grouping several processes or appliances together. This structure is aimed at providing a framework, rather than mandating a format. The underlying database schema will necessarily need to be rather complex to deal with the multitude of different formats of attack surface model which may be designed.
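The hierarchy described above can be sketched as a set of dataclasses. The names follow the text (network → appliance → process → thread → component); the attribute details are illustrative assumptions, not the actual MINERVA schema:

```python
from dataclasses import dataclass, field

# The model hierarchy described in the text, sketched as dataclasses.
# Attribute details are illustrative assumptions only.

@dataclass
class Component:
    name: str
    children: list = field(default_factory=list)  # parent/child decomposition

@dataclass
class Thread:
    attributes: dict = field(default_factory=dict)
    components: list = field(default_factory=list)

@dataclass
class Process:
    name: str
    attributes: dict = field(default_factory=dict)
    threads: list = field(default_factory=list)

@dataclass
class Appliance:
    operating_system: str     # defines the allowable permissions/capabilities
    processes: list = field(default_factory=list)

@dataclass
class Network:
    appliances: list = field(default_factory=list)
    users: list = field(default_factory=list)     # abstract users

# Per the note above, a process collapses to a single thread for now.
server = Process("minerva_server",
                 threads=[Thread(components=[Component("Storage")])])
print(server.threads[0].components[0].name)
```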

For each asset or interprocess communication method, source and destinations are defined – this may be a many:many relationship. When these are in different security domains, a primary threat vector may be generated for threat modelling. When these do not cross a domain a secondary threat vector may be generated – for example where defence in depth may be involved.

It is planned that development will begin at the Process and below level, higher levels will not be addressed until later in the development process. (Note: Development has proceeded with this plan. There is support for networks etc but I’ve focused on processes and below, as well as the capabilities an OS may have.)

#### Example Attack Surface Model

We will now explore an example attack surface model, designed from the top down, using MINERVA as an example. Figure 3 shows the highest level architecture, which is made up of only two parts. Each of these would be stored as a separate process set, on an undefined appliance. Leaving the appliance undefined means that the processes may be on the same, or different, appliances. Each process set is in a different security domain, meaning that the SOAP over HTTPS (Note: JSON over HTTPS is actually being used) crosses between security domains and hence represents a threat.

Figure 3: High level DFD

This high level data flow diagram (DFD) provides structure, but isn’t especially useful in itself. The MINERVA server can be further decomposed as in Figure 4. This takes the MINERVA server process, with a single thread, which contains the components described. It should be noted that the Database here is a component, rather than an asset – assets are specific, whereas components are generic.

Figure 4: MINERVA Server DFD

The dotted part of the MINERVA Server DFD can be further decomposed, and made more specific, as shown in Figure 5. When decomposition occurs, stubs will be auto-generated based on the higher level – for example, in Figure 5 the Verification and Read stubs are present. Normally the components defined in the higher level DFD would be sketched into the decomposed DFD, so that when the finer grained components are defined in the lower-level DFD a relationship can be assigned between a higher level component and the lower level subcomponents (which are simply stored as components within the database, with a parent/child relationship).

Also of note in Figure 5 is that a package is defined (SQLite 3) which is a reference to a LIB or DLL – this would be stored as an attribute. A file asset is also defined, in a separate security domain.

Figure 5: Storage decomposition

Where network components are involved, an alternate type of decomposition may be useful – stack based decomposition. MINERVA knows the network stack involved for an expandable set of well-known protocols, such as SOAP over HTTPS in this case. The user may be prompted with the stack as shown in Figure 6 (Original Version) and the user can then break the stack into relevant components. For example, in Figure 6 (Component Decomposed) the Operating System (defined by the Appliance) handles up to and including TCP. The process then uses OpenSSL (with a specified version) for parsing of SSL, and the Connection Handler subcomponent is used for HTTP and SOAP parsing. Of course, MINERVA may also make a guess about the stack – for example if it knows HTTPS is in use, and also notes that OpenSSL is an included DLL.

Figure 6: Connection Handler Decomposition

The Connection Handler in the decomposed version is a different Connection Handler to that in the Original Version. The system is aware of this because it has different connections – it communicates with OpenSSL rather than ‘Tools’. The name of a component is stored as metadata, rather than it being the identifier.

An alternate method for decomposing is shown in Figure 6 (Alternative Component Decomposed). This doesn’t use the stack decomposition method, but rather appears more as a protocol break. This style may be more appropriate when numerous components are shown rather than just the high level Connection Handler; however, it will be less common for attack surface model creation by novices. This is an example of how different types of display may be used for different scenarios.

#### What can be done with this information

When constructing an attack surface model from the top down, the data collected can be used to verify the low level implementation.

Processes

• What dynamic libraries should be loaded?
• What files should be opened, and in what mode (r/w/x)?
• What network connections should be opened/listened for?
• What IPC methods should be defined, with what permissions?
• What OS privileges/permissions should the process have?

Net

• What firewall rules should apply?
• Similarly, what Intrusion Detection System rules could apply?
• Should the connection be encrypted? This can be tested for.
• Should there be authentication? This may be tested for.
File

• What files are opened, and how (exclusive access?, r/w/x)
• What permissions should any files have (vs the user the process is running as, and vs other users which may need to access the file)
Lib and DLL/SO files

• What versions are in use? These could be used for bug tracking.
• Import attack surfaces and threat models for these products from other MINERVA servers
For a bottom-up attack surface model, all the above may be collected and used to construct the attack surface model. For example, a scan may find that ProcessX.exe has:-

• Network: Listening on tcp/8001
• File: wibble.db (identified as a SQLite3 Database by tools such as file or TrID)
• DLL: OpenSSL version a.b.c, importing functions to do with SSL
• LIB: SQLite version d.e.f (learnt from the build environment)
• Makeflags: ASLR (-fPIE), -fstack-protector, -D_FORTIFY_SOURCE, -Wformat, -Wformat-security
• RunAs: UserX, who has standard user permissions
The import tool could take this information, and use it to prompt for the following:-

Net
• What protocol is on tcp/8001?
• Where are connections to tcp/8001 expected from? What security domains?
Files
• For wibble.db, confirm that it is a SQLite 3 file
• What data is stored – is it sensitive?
• Should it be encrypted? Should it hold data that is encrypted by the app?

This can all be used to define an attack surface model, with minimal overhead.
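A minimal sketch of how such scan findings might drive the prompting (illustrative Python; the finding format and question wording are invented, not MINERVA’s actual API):

```python
# Finding dicts and question text are invented for illustration.
def prompts_for(findings):
    questions = []
    for f in findings:
        if f["type"] == "network":
            questions.append(f"What protocol is on {f['port']}?")
            questions.append(f"Where are connections to {f['port']} expected from?")
        elif f["type"] == "file":
            questions.append(f"Confirm {f['path']} is a {f['guess']}")
            questions.append(f"Is the data in {f['path']} sensitive? Should it be encrypted?")
    return questions

# Mirrors the ProcessX.exe example above.
findings = [
    {"type": "network", "port": "tcp/8001"},
    {"type": "file", "path": "wibble.db", "guess": "SQLite 3 database"},
]
for q in prompts_for(findings):
    print(q)
```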

Once an attack surface model has been defined, this could also be used to perform “what if” analyses. For example, what if component X was compromised, and hence the security domain it is in changes?

Something that may also be attempted would be to take an attack surface model for a product for a given OS, and change the OS. Different operating systems have different privileges, capabilities, and permissions, and MINERVA could help prompt for and define those which should be used for new and different operating systems.

### Design Decisions

The MINERVA server will be written in C#, due to familiarity with the language, cross platform support, and extensive tooling already existing for it. Tools will be written in whichever languages make sense. C# will be the default choice with native extensions where needed, however the Linux application will likely be Perl due to ease of programming.

The network protocol used will be HTTPS, as it is a standard and will support all necessary requirements. SOAP may be used over this, again for standards requirements. REST was considered, however the authentication requirements and large payloads mean that SOAP will be the most suitable. This decision may be revisited when development is under way. (Note: Currently using JSON, as it’s vastly easier to code for).

Authentication will initially be against credentials held in the database, as this will be the easiest mechanism to implement and non-enterprise customers may prefer it. HTTP(S) authentication, against OS/AD credentials, is a stated aim for the future, to facilitate enterprise use.

The database will be SQLite initially due to ease of use. There are scalability concerns with SQLite however, which may require support of an enterprise grade database in the future. All database operations must therefore go through an abstraction layer in order to ensure that any future changes are as painless as possible. (Note: Doing code-first database development, it was easiest to use MS SQL.)
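The abstraction-layer idea can be sketched as follows (illustrative Python over SQLite; the real implementation is C#, and the interface shown here is invented):

```python
import sqlite3
from abc import ABC, abstractmethod

class ModelStore(ABC):
    """Tools talk to this abstract store, so swapping SQLite for an
    enterprise-grade database only means writing a new backend."""
    @abstractmethod
    def save_component(self, name, parent=None): ...
    @abstractmethod
    def children_of(self, name): ...

class SqliteStore(ModelStore):
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS component"
                        " (name TEXT PRIMARY KEY, parent TEXT)")
    def save_component(self, name, parent=None):
        self.db.execute("INSERT OR REPLACE INTO component VALUES (?, ?)",
                        (name, parent))
    def children_of(self, name):
        rows = self.db.execute(
            "SELECT name FROM component WHERE parent = ?", (name,))
        return [r[0] for r in rows]

# Parent/child decomposition as described for the Storage component.
store = SqliteStore()
store.save_component("Storage")
store.save_component("Verification", parent="Storage")
store.save_component("Read", parent="Storage")
print(store.children_of("Storage"))
```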

The main OS to be targeted for development will be Windows 7 and later, although where possible the design and implementation should be host OS agnostic. v1.0 should also include support for Linux clients/targets, and possibly also Android.

### Minimum Features

As seen in the high-level design, the majority of the functionality of the solution depends on external tools. The following tools and features are the minimum desired set which must be in place before it can be deemed to be version 1.0.

Graphical UI for creation of attack surface models diagrammatically
People are used to drawing attack surfaces, and for simple systems this may still be the easiest way for knowledgeable users to create high level threat models. The Microsoft TM tool has become the de-facto standard for this, and so a similar tool will be needed for MINERVA. This tool would allow the graphical representation, as a data-flow-diagram of attack surface models, for viewing, creation, and modification of attack surfaces at all levels. (Note: Currently just using exports from the MS tool, but there are serious problems when deeper integration is desired. For example, having the ability to right-click on a graphical node, and then automatically scan the associated process/file).
UI for creation of attack surface models textually
For larger and more complex attack surfaces, creating attack surface models diagrammatically isn’t necessarily ideal. Furthermore, while a drawing canvas is good for people versed in the creation of these diagrams, for non-security-specialists a text-based input method may be best. This would allow users to list, for example, all the different interfaces, IPC, etc used, and then describe how these are implemented by different components. This would also allow a tool to prompt the user for more information, and make them think in a certain way. (Note: Currently implemented in a datagrid)
Windows Process Scanner
One way to identify an attack surface is to scan a running system. This can work in several ways: by analysing a system before and after an application is installed and then comparing these, or by monitoring a process’s execution to detect files, IPC, network connections and the like that are created dynamically. Realistically both will need to be used.
The Microsoft Attack Surface Analyzer performs the former task already – the MINERVA tool will allow the import and parsing of these. There are a number of different tools which provide the latter functionality, however it is most likely that something custom will be written, albeit using commonly known and used techniques to gain the desired information. (Note: Currently using text output from Sysinternals Process Explorer etc – although the plan is to write a more tightly-coupled tool in the future)
Linux Process Scanner
This would be the Linux equivalent to the Windows Process Scanner. For version 1.0 it will only include support for a couple of the more common distributions.
Stupid Source Scanner
Some components, such as static libraries, cannot be scanned using the previous tools. Therefore a basic tool will need to be written to grep through source code and development environments to try to generate an attack surface. There already exist tools which spider source code, looking for security issues for example, however few of these allow third party plugins or extensions.
While the preferred solution for this tool will be to extend an existing third party tool, a custom tool may need to be written. The tool would need to be able to handle the following for version 1.0: parsing of Makefiles, MS .VSProj, C/C++, C#, and Java. For version 1.0 the quality of the parsing/spidering will be very basic – essentially grepping for specific APIs, and identifying linked libraries.
Use may also be made of code annotations for the likes of Lint, and C# Contracts. It should be noted that the aim of the Stupid Source Scanner is not, certainly initially, to be anything like complete, rather it is to get quick and dirty information out of the codebase with minimum involvement of developers.
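A toy sketch of the grepping approach (illustrative Python; the API list here is a tiny invented example set, nothing like the real coverage needed):

```python
import re

# A few example patterns mapping APIs to attack-surface categories.
SURFACE_APIS = {
    r"\bsocket\s*\(": "network",
    r"\bfopen\s*\(": "file",
    r"\bCreateNamedPipe\w*\s*\(": "IPC",
}

def scan_source(text):
    """Quick-and-dirty grep: report (line number, category, line) hits."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pattern, kind in SURFACE_APIS.items():
            if re.search(pattern, line):
                hits.append((lineno, kind, line.strip()))
    return hits

code = """int s = socket(AF_INET, SOCK_STREAM, 0);
FILE *f = fopen("wibble.db", "rb");"""
for hit in scan_source(code):
    print(hit)
```

This captures the spirit of “essentially grepping for specific APIs” – deliberately stupid, but cheap to run over any codebase.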
Microsoft Threat Model (.tm4) Import Tool
The Microsoft TM tool is the standard for creation of attack surface diagrams, and using this to create threat models. These are saved in .tm4 files, which are simple XML. As a lot of security-aware enterprises may have already attempted to create threat models using this tool, for a subset of their components, it is vital to be able to import these into MINERVA. For version 1.0, attack surfaces must be imported, however the threat models themselves do not need to be parsed – they can just be stored until support is added with a later version of MINERVA.
Support for exporting as a .tm4 may also be added, depending on ease.
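Since .tm4 files are plain XML, the parsing side of an importer is straightforward with standard tooling. A sketch (the element and attribute names below are invented – the real .tm4 schema would need inspecting):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample -- NOT the real .tm4 schema, just plausible XML.
sample = """<ThreatModel>
  <Elements>
    <Element type="Process" name="MINERVA Server"/>
    <Element type="DataStore" name="Database"/>
    <Element type="Flow" name="SOAP over HTTPS"/>
  </Elements>
</ThreatModel>"""

def import_elements(xml_text):
    """Pull (type, name) pairs for every diagram element in the file."""
    root = ET.fromstring(xml_text)
    return [(e.get("type"), e.get("name")) for e in root.iter("Element")]

print(import_elements(sample))
```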
Administration Tool
Any product whose aim is to hold security-sensitive information, and be used by large numbers of users, must support authentication and authorization checks. These in turn require administration. Likewise, an administrator must be able to decide which attack surface models, for which components, and to what level of detail, may be shared externally. The administration tool will provide a mechanism to perform these administrative functions, as well as access to logging including security audit logs.
Inter-Service Interface
A stated aim of MINERVA is to allow sharing of attack surface models between different instances of MINERVA. Rather than building support for this into the MINERVA server itself, a separate tool is desirable for security reasons as well as to simplify implementation. The ISI will essentially just be another tool, running with its own credentials, and so even if the ISI were to be compromised then that wouldn’t lead to compromise of the server itself.
Threat Model Generation
An obvious use for an attack surface is to automate generation of threat models. This tool will perform this generation, and could potentially allow user interaction with the threat models themselves – although this could be implemented as a separate tool.
Test Plan and Coverage Generation
Once an attack surface has been designed, test plans and coverage analysis may be an alternate way to convey to developers the same information as a threat model would. This tool could list, for example, the different tests which should be performed to gain assurance that the implementation is secure – for example it may call out the network interfaces to fuzz, the files to try modifying, and the like. By conveying the information in a way that developers are more used to understanding, this may help increase coverage of security-relevant testing – for example many developers do very little ‘negative’ testing, and instead rely on ‘positive’ testing.
Firewall Exception Generation
Through analysis of network connections/interfaces, a list of expected firewall rules together with protocols may be generated. This would be of use to customers, where a developer has used MINERVA, for example to know what firewall exceptions to put in place together with what network traffic their Network-IDS should be detecting. It will also be useful for developers to detect and enumerate unexpected network connections, for example debug support which has accidentally been left in.
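As a hedged sketch, declared network interfaces might be turned into iptables-style rules, with anything observed-but-undeclared flagged (illustrative Python; the rule syntax is simplified):

```python
# Turn declared (protocol, port) pairs into simplified iptables-style rules.
def firewall_rules(declared):
    return [f"-A INPUT -p {proto} --dport {port} -j ACCEPT"
            for proto, port in declared]

# Anything a scan observed that the model didn't declare is suspicious.
def unexpected(declared, observed):
    return sorted(set(observed) - set(declared))

declared = [("tcp", 8001)]
observed = [("tcp", 8001), ("tcp", 9999)]  # 9999: a forgotten debug port
print(firewall_rules(declared))
print(unexpected(declared, observed))
```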

### Development Order

A proposed order for the development of the minimum features is below. For each feature, support will be added to the main server as needed. The first step will of course be to create the server itself, with backing database and network APIs. Following this, a rough administration tool will be written – this will allow testing of the network APIs. The UI tool for textual creation/representation of models will be next, due to ease of implementation, and to allow further testing of the server, and then threat model import in order to be able to quickly fill in examples. The Stupid Source Scanner, and either the Windows or Linux Process Scanner will be next, to test bottom-up attack surface construction. At this point the solution will in some ways be surpassing what is currently available in the public domain.

The Inter-Service Interface will next be stood up, to test being able to import/export between MINERVA instances. Writing a Graphical UI for attack surface diagrams will likely be non-trivial, and so until this point the MS TM tool will have been used extensively. However, this is a necessary tool to have, and so it would be written at this point. This is the last tool to input attack surfaces. The generation and analysis tools will finally be written, as these are dependent on a number of previous features.

So, in summary, the rough order for development will be as follows, although of course there will likely be overlap between all of these.

1. MINERVA server
2. Administration tool
3. UI for creation of attack surface models textually
4. Microsoft Threat Model (.tm4) Import Tool
5. Stupid Source Scanner (Note: I’m doing a v0.1 of the Windows Process Scanner first)
6. Windows Process Scanner (Note: v0.1 takes the text output from existing tools such as Sysinternals Process Explorer)
7. Inter-Service Interface
8. Graphical UI for creation of attack surface models diagrammatically
9. Linux Process Scanner
10. Test Plan and Coverage Generation
11. Firewall Exception Generation
12. Threat Model Generation

### Stretch Goals

While the above are the minimum features, there are several other tools and ideas that may prove desirable at some point.
Binary Analysis
Parsing of DLL/SO or LIB files to look for interfaces, for example looking at import tables to identify APIs in use.
Common Criteria Security Target Generation
Common Criteria relies on a Security Target document, which states how a product meets certain design requirements. This document is very onerous to create, but there are some aspects which could be automated based on attack surface diagrams and mitigations called out in threat models.
Graphical UI for other diagram types
The standard diagram type for attack surface diagrams is the dataflow diagram. However, other diagrams may be useful at times, for example UML Class, Package, and Activity diagrams may all be useful at certain levels of attack surface model.
Taint Tracking
If there could be some standardisation of attack surface model components, plus the expected contents of files, IPC, and network traffic, then some form of taint tracking may be possible. For example, if a file should be encrypted, and a high-level component is flagged as performing encryption, then the low level analysis could perform source code or dynamic analysis to identify whether encryption APIs are in fact being used. If there was no high-level component which was flagged as performing encryption, then that could be identified as an issue.

## Current Status

Since submitting to Cyber10k, I have begun actual development of MINERVA – as noted before, winning Cyber10k would be a bonus but I was planning to give it a stab anyway. Of course, finding out I won has added a certain impetus.

The Minerva server is currently operational, albeit with no concept of user controls (for administration), or data versioning. Nonetheless, attack surface models can be stored and queried, and the combination of separate threat models has been proven – for example when two ‘solutions’ read/write from the same file, or listen/connect to a network connection, then correlation can be performed and arbitrary attack surface models drawn which may include some of each solution.

I used code-first database design techniques with Visual Studio, in C#, and so have used MS SQL, with JSON as a protocol just because it’s so damned easy. I have an administration client which also allows me to manually add/delete/modify attack surface models via a table-like UI. I can import MS .tm4 files, but this hasn’t been tested with the newest generation of the Microsoft Threat Modelling tool. Export to .TM4 isn’t yet supported either. I’m currently working on using the text/CSV output from existing tools such as dumpbin, and Process Explorer, as a temporary stopgap for proof-of-concept of the Windows Process Scanner. The next steps will be to export threat models, perform a few bits of analysis, and then have a custom-written Windows Process Scanner.

Fingers crossed, I’m hoping to be alpha-testing in November, with a v1.0 by March 2015 (by which point I may need to get a real job again :( ). Still, things are looking remarkably good at the moment.

Anyway, I hope this was of some interest to some people. Please feel free to hit me up if you’re interested in alpha- or beta-testing Minerva, or have any other queries – always happy to chat. In addition to the blog, feel free to email me at minerva at ianpeters.net.

# A bit of Orbital Mechanics

When I read about the failed insertion [1] of several Galileo satellites, I was intrigued whether they would have enough propellant on board to sufficiently alter their orbit. My gut said no, but I decided to do the math to find out. Below is my working – note though that I’m self-taught so this may be wrong :) Also note that the numbers I’ve given below are rounded, so there may be rounding errors, and I’ve made numerous simplifying assumptions such as not needing to fix the LAN or Mean anomaly.

The TLDR is that nope, they’re way short (assuming my maths works out). DeltaV requirements are as follows (m/s):-

• Best case deltaV needed: 587+70=657
• Fix inclination: 432
• Fix Perigee: 369
• Fix Inclination+Perigee at the same time: 587
• Fix Apogee: 70
• Fix Apogee+Perigee at the same time: 675
• Best case available deltaV: 250

### Intro to terms

For those not familiar with orbital mechanics etc, I thought I’d define a few terms:-
Delta V (dV): Change in velocity needed/available.
Apogee ($s_a$): The furthest point of an orbit from the earth
Perigee ($s_p$): The closest the orbit gets to the earth
Inclination (i): The angle between (in this case) the rotation of the earth, and the orbit. 0 degrees means the orbit is around the equator, whereas 90 degrees means a polar orbit, going over the n/s poles
Semi-Major Axis (a): All non-escape orbits are ellipses. The Semi-Major Axis is half the distance between extremes of the orbit.
Isp (I): The Isp is a measure of the efficiency of an engine, in units of seconds.
Gravitational Constant (G): A constant you need to know
Mass of the earth (M): Another constant you need to know
Radius of the earth (R): Another constant. We’ll assume the earth is a sphere.
x,y,z: X is parallel to earth’s surface, in line with the equator; Y is parallel to earth’s surface, towards the north pole; and Z is towards the center of the earth.
Eccentricity (e): A measure of how far from a circle the orbit is – 0 is a circle.

A quick aside – all the formulae use SI units (kg, m, s), hence don’t forget to change km to m… And note that the Apogee and Perigee are altitudes above the earth’s surface, so you need to remember to add the radius of the earth a lot of the time.

### dV needed

So, first of all, let’s extract some of the numbers we need from the article, and a couple of other pages.
Current Apogee: 25922km
Current Perigee: 13700km
Current Inclination: 47
Desired Apogee, Perigee: 26189km
Desired Inclination: 55.04
Eccentricity (e): 0.23

The most efficient way to fix the orbit will be to a) change the inclination at apogee, then b) raise the perigee, then c) drop the apogee. A lot of burns combine (b) and (c) into a single circularisation burn, which is generally inefficient. An alternative is to combine (a) and (b).

The formulae we’ll need here are:-
Semi-major axis: $a = R+(s_a+s_p)/2$ [4]
Velocity in orbit: $v = \sqrt{GM[2/r-1/a]}$ where r is current radius, i.e. distance from center of earth
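These two formulae are easy to check numerically. A quick Python sketch (SI units; GM and R are the usual earth values):

```python
from math import sqrt

GM = 3.986e14   # gravitational parameter G*M for the earth, m^3/s^2
R = 6378e3      # equatorial radius of the earth, m

def semi_major_axis(apogee_alt, perigee_alt):
    """a = R + (s_a + s_p)/2, with altitudes above the surface."""
    return R + (apogee_alt + perigee_alt) / 2

def orbital_speed(r, a):
    """Vis-viva: v = sqrt(GM*(2/r - 1/a)), r measured from earth's centre."""
    return sqrt(GM * (2 / r - 1 / a))

a = semi_major_axis(25922e3, 13700e3)     # current orbit
v_apogee = orbital_speed(R + 25922e3, a)  # speed at current apogee
print(round(a / 1e3), round(v_apogee))    # → 26189 3076
```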

##### Inclination

When at Apogee, the satellite is moving parallel to the earth’s surface. Its velocity v will have north/south and east/west components of $v_y$ and $v_x$ respectively.
$v_x = v\cos(i), v_y = v\sin(i)$
Plugging numbers in, we get the following current situation:
$a = 26189km, v_a=3076ms^{-1} => v_x=2099ms^{-1}, v_y=2251ms^{-1}$

Using the same $v_a$ but changing i, we can get the new $v_x, v_y$:
$v_{x\_new} = 1763, v_{y\_new} = 2522$

Therefore, we can see the burn we need is $(dv_x, dv_y) = (v_{x\_new}, v_{y\_new}) - (v_x, v_y)$
$(dv_x,dv_y) = (-335, 271) => dv = \sqrt{dv_x^2+dv_y^2} = 431ms^{-1}$
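The same calculation in Python (constants restated so the snippet stands alone):

```python
from math import sqrt, sin, cos, radians

GM, R = 3.986e14, 6378e3
a = 26189e3                       # current semi-major axis, m
r = R + 25922e3                   # radius at apogee, m
v = sqrt(GM * (2 / r - 1 / a))    # speed at apogee, ~3076 m/s

i_cur, i_new = radians(47), radians(55.04)
dvx = v * cos(i_new) - v * cos(i_cur)   # east/west change, ~-335
dvy = v * sin(i_new) - v * sin(i_cur)   # north/south change, ~+271
dv = sqrt(dvx**2 + dvy**2)
print(round(dv))                  # → 431
```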

##### Raise Perigee

To raise the perigee, we burn along the line of the velocity vector, injecting energy into the orbit. So, we start with the current velocity at apogee v, then find out the new velocity needed $v_{a\_new}$ for the new semi-major axis $a_{new}$.
First we compute $a_{new}$ using the desired perigee rather than the current perigee. We then calculate what velocity we’d have at apogee using $a_{new}$.
That gives us: $a_{new} = 31100km, v_{new\_peri} = 3446$ therefore $dv = 369ms^{-1}$
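Numerically, using the $a_{new}$ from above (a Python sketch; the small difference from the 3446 quoted here is just rounding):

```python
from math import sqrt

GM, R = 3.986e14, 6378e3
r_apogee = R + 25922e3
v_cur = sqrt(GM * (2 / r_apogee - 1 / 26189e3))  # current speed at apogee
a_new = 31100e3                                  # new semi-major axis, from the text
v_new = sqrt(GM * (2 / r_apogee - 1 / a_new))    # speed needed at apogee
print(round(v_new), round(v_new - v_cur))        # → 3444 369
```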

##### Changing inclination and raising perigee

We can combine the previous steps into a single step. To do this we take the $v_{new\_peri}$ from the “Raise Perigee” step, and use that instead of $v_a$ in the calculations used for the “Inclination” step. This gives the following results:
$v_{x\_new2}=1975, v_{y\_new2}=2825 => (dv_x, dv_y)=(-124,574) => dv = 587ms^{-1}$

As we can see, it’s more efficient to do this in one step than separately – 587 vs 431+369=800

##### Drop Apogee

At this point, the perigee is at the correct altitude, but the apogee is too high. To fix this, at the perigee we do a retro burn to remove energy from the orbit. Using the semi-major axis calculated at the “Raise Perigee” step, we calculate the new semi-major axis for a circular orbit $a_{circ}$ and use this to calculate $v_{circ\_peri}$ at the perigee. Subtracting $v_{circ\_peri}$ from $v_{cur\_peri}$ we can get the dv needed.
Plugging the numbers in: $a_{circ}=29900, v_{circ\_peri}=3653, v_{cur\_peri}=3723 => dv = 70ms^{-1}$
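Numerically (again, small differences from the quoted 3653/3723 are rounding):

```python
from math import sqrt

GM = 3.986e14
r = 29900e3                                # radius at the now-correct perigee, m
v_circ = sqrt(GM / r)                      # circular-orbit speed at that radius
v_cur = sqrt(GM * (2 / r - 1 / 31100e3))   # speed there on the transfer orbit
print(round(v_circ), round(v_cur), round(v_cur - v_circ))  # → 3651 3721 70
```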

##### Circularise Perigee+Apogee in one step

As an alternative to separately raising the perigee and then dropping the apogee (the traditional Hohmann transfer), a single burn can be made when the satellite is at the correct altitude, changing both the perigee and apogee at the same time. This is normally sub-optimal, but let’s see what happens.

We start with the orbit after the inclination is fixed. When the satellite is between apogee and perigee, it has a velocity vector which is both parallel to the earth’s surface ($v_{xy}$) and towards the earth ($v_z$). To calculate this, we need the angle down at which the satellite is travelling ($\theta$). We start with the specific angular momentum of the satellite [5]:
$h = \sqrt{ a(1-e^2)GM }$ (where M is the mass of the earth+satellite)
The angle $\theta = cos^{-1}(h/rv)$ where r = radius at desired apogee/perigee, v is velocity magnitude at that point.
This angle can then be used to get $v_{xy}$ and $v_z$ in the same way as we did in the inclination. The desired $v_{z\_desired} = 0$ as we want a circular orbit, with $v_{xy\_desired} = v_{circ\_peri}$ i.e. the velocity of a circular orbit.
Plugging in the numbers, we get:
$\theta=10.5, v_{xy}=3327, v_z=619, v_{xy\_desired}=3653, v_{z\_desired}=0$
We can therefore calculate the needed delta-v:
$(dv_{xy}, dv_z)=(269, 619) => dv = 675ms^{-1}$

As can be seen, the $675 ms^{-1}$ needed to do both is much greater than the 369+70 needed to do a Hohmann transfer.

### Available dV

Looking at the above, the best case scenario is to fix the inclination and perigee at the same time, and then the apogee. That would take $587ms^{-1} + 70ms^{-1} = 657ms^{-1}$ deltaV.

Each Galileo satellite has an empty mass of 660kg, and carries 73kg of hydrazine. It uses MONARC-1 [2] motors, each developing 1N of thrust with an Isp of 230s.
To find the flow rate (i.e. mass of hydrazine per unit time) we use the following (from [3]): $\dot{m} = F_{thrust} / (I_{sp} \cdot g_0)$
Using that we can get a total burn time $t_{burn\_max} = m_{fuel} / \dot{m}$
Plugging in numbers, that gets us a burn time of $t_{burn\_max} = 164710s$ for a single motor. Using multiple motors will increase the thrust, but decrease the burn time by the same ratio.

While $F_{thrust}$ is constant over time, the mass isn’t. However, we’ll simplify with a best-case scenario, using the empty mass with F=ma for $t_{burn\_max}$ seconds.
This gives us $dv_{empty} = (F_{thrust}/m_{empty}) \cdot t_{burn\_max} = 250ms^{-1}$
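Checking those figures numerically (Python, with $g_0 = 9.81$):

```python
g0, Isp, F = 9.81, 230.0, 1.0    # m/s^2, s, N (MONARC-1)
m_fuel, m_empty = 73.0, 660.0    # kg
mdot = F / (Isp * g0)            # propellant flow rate, kg/s
t_burn = m_fuel / mdot           # burn time for a single motor, s
dv = (F / m_empty) * t_burn      # best case: treat mass as constant at empty
print(round(t_burn), round(dv))  # → 164710 250
```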

This is much less than even the best case of $657ms^{-1}$ needed for the basic manoeuvres.

# Update: Data Retention and Investigatory Powers Bill

The debate within Parliament on DRIP is now largely over, after a day in the Commons and two in the Lords. A lot of points were made over the three days, some valid, some vastly less so. It was apparent that irrespective of MP or peer views on the content of the bill, there was widespread anger about the fast track process being used. I hadn’t realised that this was far from the first such fast-tracked bill, but this seemed to be a special case as all participants recognised the sensitivity of the bill and the overall lack of trust the public has in government at the moment. After watching all three days’ debate on the subject, I thought it may be of interest to a couple of my readers (literally) if I summarised the points made over the days. I’ve italicised my own thoughts, to try to differentiate between reporting and commentary. I apologise for the length of this post – there were a lot of interesting points raised in the debates, and I thought they deserved reporting.

## Key Points (aka TL;DR)

• Widespread anger amongst debate participants about the process used by the bill, but apathy by many others
• The sunset clause is generally acceptable, although not ideal
• RIPA is going to be replaced, and there are assorted reports coming up over the next couple of years which will help this
• There is a need for DRIP following the ECJ ruling, but there’s a lot of reasonable concern that this Bill may be vulnerable to some of the same issues flagged in that ruling.
• There’s less of a need for the RIPA parts of the bill, especially being fast-tracked.
• Many assert that Clause 5 of the bill does confer new powers, contrary to the government view.
• It’s obvious that many MPs and Peers don’t understand RIPA etc, and even fewer have the technical competence to understand the technical aspects.
• Typical straw men arguments were used by many bill supporters, including the government, which was frankly disgusting.
• Overall, the government were incompetent in how they assembled and presented this bill.

## Timing and Process

As mentioned, a significant amount of the debate covered the anger about the fast-track process, and other timing issues.

### Why the delay/fast-track?

Many people asked why it’s taken around three months for the bill to come to parliament, given that the ECJ ruling was hardly a surprise. The government asserts this has been due to the time taken to evaluate the ECJ ruling, and craft a Bill which meets these needs, and that they have been in discussions with communications providers. The former seems unlikely to me, and while the latter is undoubtedly true this doesn’t fully explain the delay. Participants gave other reasons: i) a ploy by the Home Office to suppress scrutiny, ii) disagreement within the coalition. This latter is an interesting point, which I expect will come up during the upcoming election. Generally though, there was an understanding that the ECJ Ruling meant that the data retention clauses may need fast tracking. Some people did dissent on the other clauses of the bill though, such as the Constitution Committee report on DRIP: “It is not clear why [clauses 4-5] need to be fast-tracked.” The government had asserted that this was because foreign companies, which had been helpful previously, were beginning to get nervous and require a legal shield urgently. The reason for this, and the reason for the major change from 2012 when this wasn’t a problem, was explained as being due to Snowden. That makes eminent sense to me. Overall, while I think there shouldn’t have been a need for the fast-track, or as fast a fast-track, I can understand that there is a need for urgency.

### Sunset clause

The Bill contains a sunset clause for the end of 2016. There was a lot of disagreement in this debate, with amendments proposed in the Commons to make this only six months, and in the Lords for the end of 2015 – both failed or were withdrawn. The primary reasons given by the government, and others, for the end of 2016 are: a) that there are numerous reports due which would help come up with new guidance, but these aren’t due for several months, b) the legislative process for a full rewrite of RIPA (as is needed) will take 6+ months by itself, c) the upcoming general election. Taking the election period first, parliament is due to dissolve on 30th March 2015 and return some time after 7th May 2015 [10]. Obviously no legislation can be progressed while parliament is dissolved, and it was asserted that it wouldn’t be feasible for anything to be done in the first six months of the new government. Furthermore, Lord Hodgson stated that “[Political] campaigns are bound to be conducted in primary colours to gain public attention. We are balancing the difficulties of issues of privacy and national security that have nuances and require light and shade, which do not lend themselves well to the hurly-burly of a general election campaign.” I find the arguments about the general election partly fair, but I think the public deserves to be able to take into account the views of the candidates and parties on the matter of data privacy. Waiting until after the election to have these discussions is a disservice to the public. Regarding the reviews and reports, please see the “Reviews of RIPA and DRIP” section. There are indeed several reports due which may be useful; most are due prior to mid-2015, although notably the independent RUSI report and the report of the Independent Privacy and Civil Liberties Oversight Board are due after. Mid-2016 may have been a more feasible amendment. There were a couple of requests for detailed timescales to justify the end-2016 sunset.
It would have been useful if the Home Office had drafted a rough indicative schedule in order to help explain why they had selected that date. Overall, I still don’t like the end-2016 sunset, but I had forgotten the general election was due, and so it’s barely acceptable to me now.

## Reviews of RIPA and DRIP

### Current/Recent Reviews

The Home Affairs Select Committee supports the Bill, as does the Intelligence and Security Committee (ISC). The ISC was informed about the Bill the day before the Home Secretary’s statement, and has discussed it with the agencies. Overall, they are happy that it doesn’t “simply add to the powers” of RIPA, which would have made them uncomfortable.

### Other Reviews

An Independent Privacy and Civil Liberties Oversight Board [11] is due to be set up, with legislation coming in this session (i.e. before 30th March 2015). This will be made up of four members, and will replace the role of the independent reviewer of terrorism legislation. Their first report will be due a year after being established – so likely sometime around mid-2016. Two areas of concern were flagged up in the debate. Firstly, whether the members will have sufficient access to classified material to fulfil their role. The second was that it was noted that as even the Intelligence Services Committee apparently didn’t know about the GCHQ Tempora project, it’s unlikely the Oversight Board would be able to fulfil their role. I do have my concerns on this likewise. The Labour opposition have asserted that they want to do the following:-

• Strengthen the ISC and have an Opposition Chair (i.e. make the chair of the ISC a member of the opposition, rather than the government)
• Overhaul the commissioners – there are too many and the reports they produce are not public facing
• Change the focus of the commissioners – they are often limited to assessing compliance with existing legislation, rather than looking at whether legislation is still appropriate/effective

The Royal United Services Institute (RUSI) is conducting an independent surveillance review [14] which will extend beyond the 2015 general election.

## ECJ Ruling

The urgency arising from the ECJ ruling comes from the confluence of two things. Firstly, the ECJ ruling that the EU Data Retention Directive was unlawful. This left the UK implementation of that directive in danger, as it was not implemented as primary legislation. The second is the Data Protection Act, which requires companies to delete customer data as soon as it is no longer needed. If the UK retention laws were overruled, then companies would be legally required to delete any data they had retained under the UK law, which they themselves did not need to retain. There is a judicial review which asserts the unlawfulness of UK data retention legislation; this was stayed while the ECJ ruled, and is due to report very soon. I wonder if the government had strong suspicions that the review would conclude that the legislation was unlawful, hence the need for the Bill. The Constitution Committee has reported that they believe the UK regulations now lack legal authority [1]. The Home Office had previously told companies to ignore the ECJ ruling, as the law remained in the UK. This was widely seen as a stopgap. The government has asserted that DRIP, together with already existing laws, meets the key issues identified by the ECJ. There was a lot of concern during the debate that this is not the case; for example, while variable retention terms are added, as per the ECJ ruling, neither DRIP nor the regulations provide objective measures for determining what retention term to use. Some MPs and peers believe there are other issues not addressed. David Davis (Con): “While the Bill may be law by the end of the week, it may be junk by the end of the year.” The question of whether the new law will be safe is still not decided. The Joint Committee on Human Rights has asked the government to publish an analysis of the ECJ ruling and how the proposed UK legislation matches up, but this has not been done. Publishing such an analysis would be useful in working out how safe the law is.

## RIPA

### Extra-territoriality

Unlike the retention aspect of the bill, there was less evidence that the RIPA extra-territoriality terms needed to be fast-tracked. The Constitution Committee said they didn’t understand why these needed to be fast-tracked. [1] The government asserted several times that companies had previously been friendly and complied, on an extra-territorial basis, with exceptions, but that recently companies who were previously compliant were requesting legal cover. No reason for this was given initially, however eventually the government asserted that this has been since Snowden. However, while the magnitude of the problem may have grown recently, this was not a new problem – Lord Davies: “The Joint Committee on the Draft Communications Data Bill noted in its report (published in December 2012) that ‘many overseas CSPs [communication service providers] refuse to acknowledge the extraterritorial application of RIPA’.” The question remains therefore why this needed to be fast-tracked now, some 7+ months later, rather than addressed during what many debaters asserted was a light legislative session. As for whether extra-territoriality is new, Jack Straw (one of the RIPA architects) confirmed that the initial intent of RIPA had included extraterritoriality. Therefore the government’s assertions that clause 4 of the Bill doesn’t add any new powers are possibly true, or at least in accordance with the initial intent. A concern was raised that this reading of RIPA didn’t tally with what other people thought, and that such a reading of RIPA was a ‘secret’. I’m not convinced by this, but do agree that there is confusion due to the purposefully complex construction of the law. However, extra-territoriality isn’t the only modification to RIPA in the Bill. Clause 5 modifies the description of a “telecommunications service” in an extremely broad way.
It was stated that Liberty, the lawyer Graham Smith, and others believe that Clause 5 does confer new powers – rubbishing the government’s assertion that nothing in the Bill does so. This point was only concretely discussed by the Lords – I believe the Commons somewhat missed this. An unanswered question raised in both houses was what to do if companies say no. There was some government handwaving, but no real answer given. Finally, and not in the bill itself, the government has asserted that they will assign a Senior Diplomat to look at bi/multi-lateral agreements to cover extra-territoriality, such as the Mutual Legal Assistance Treaty (MLAT) with the US. For example, “mutual recognition of national warrants”. This does make me somewhat nervous, as the US courts are not renowned for the quality of their jurisprudence in national security matters.

### What is “National Security”?

While people were generally happy about the attempt to limit RIPA use for ‘economic’ purposes to only apply to national security aspects of the economy, there was still the question of what is “national security” in this area. Katy Clark (Lab) asked, for example, whether this could be used “in a situation such as the miners’ strike of 1984-85?” There was reassurance from the government that such was certainly not the intention.

### Replacement of RIPA

Clause 7 has been added to support an investigation of RIPA, and there seems to be cross-party agreement that RIPA needs replacement. I’m pretty confident that this is going to happen in the next parliament – it’ll be interesting to see what happens. In the interim, there has been much discussion about the number of bodies which can issue RIPA requests. It turns out only 13 are being removed.

## Snowden

Snowden was mentioned several times, generally when discussing several different points. Firstly, his releases were used as one of the explanations for why the public no longer trust the intelligence services and government oversight. Secondly, there was discussion about how these were handled differently in the US versus the UK. Attention was drawn to the generally lacklustre reception in the UK parliament, and the minimal discussion that has occurred. The proposed Independent Privacy and Civil Liberties Oversight Board was seen as one way to address this, largely aping the US equivalent set up several months ago. Thirdly, and related to the first point, there is concern over future oversight. Generally people think that GCHQ have behaved legally, although on the border of the law, and that the complexity of RIPA has meant that many, including MPs and peers, didn’t fully understand what was allowed. There was concern that the government has had private or secret interpretations of RIPA, which has allowed justification of behaviour contrary to the understanding of many – for example the ‘external’ aspect of RIPA. I was generally not surprised about the interception/‘external’ aspects of RIPA – I had read it in some depth – and in fact I was rather surprised that so few other people had. However, I’ve long had concerns about oversight, RIPA as it applies beyond the intelligence services, and several other related areas. Overall, if Snowden has made a few people open their eyes, then good.

## Lords vs Commons

It was interesting seeing the difference between the debates in the Commons and Lords. I found the Lords debate much more fact based, though that could be due to the abridged nature of the debates – several of the reports referenced in the Lords were only published after the Commons had concluded their debate. The Lords also ranged somewhat more widely in their discussion, looking beyond just this immediate Bill and a bit of RIPA. This included consideration of Extraordinary Rendition, and wider security matters. The turnout for both was relatively poor, albeit no worse than many other debates. It’s possible that as this was a party vote, and so was inevitably going to pass, many couldn’t be bothered. It’s also possible that some didn’t want this around their necks for the general election. Overall, I was slightly underwhelmed by both debates, and especially that in the Commons. That said, the debate over the next 30 months(!) on the replacement for RIPA is likely to be much hotter – there are definitely strongly held opinions and this is just an early battle in a much larger war.

## Other Points

### Differentiation between types of data and aspects of RIPA etc

There appeared to be a lot of confusion in the debates between retention, acquisition, and interception, plus confusion between communications data and relevant communications data. For example, Alan Johnson defended RIPA as not being intrusive by comparing the 221,000 postal items opened in 1969 with the only 2,670 intercept warrants in 2013. However, he failed to note the 514,000 RIPA authorisations and notices which were also made. I’m going to assume Mr Johnson just made an error, rather than actively trying to mislead. The debate highlighted the unnecessary complexity and confusion inherent in DRIP, RIPA, and similar.

### Need for retention

Several different statistics were provided to highlight why retention and RIPA were needed. According to the CPA, “Communications data is used in 95% of Serious and Organised Crime”, however no information was given about the types of data, or types of RIPA request – there’s a big difference between a request for subscriber information and full interception, not least that the latter requires a warrant from a Secretary of State. Several concrete examples were given for why data should be retained, however it was interesting that all those given only needed twenty-four hours of data at most – something which no MP flagged up. There was however an assertion that almost 50% of communications data used in child abuse cases are more than six months old. It’s unknown whether this referred to “communications data” or “relevant communications data”.

### Communications Data Bill

Several references were made to the Communications Data Bill (aka the Snooper’s Charter). Generally these appeared to be by fans of that Bill, who were attempting to tie the need for this Bill to the Lib Dems blocking that one. There was furthermore an assertion that the unpublished draft of that bill is no longer a snooper’s charter.

### Secondary legislation (Regulations)

Clause 1(3) of the bill allows for Regulations to be written, and these are available in draft form [4]. The Regulations are necessary, but there are questions about when they will actually go before parliament – no schedule has been given. The clause has also been flagged up as having problems by the Constitution Committee (that while it gives the power to make regulations, there’s no requirement to do so) [1] and the Delegated Powers and Regulatory Reform Committee (that the powers aren’t restricted) [2]. I’m not sure I agree with the latter, as I thought 1(4) restricted the powers, but it depends very much on what “may” means.

### The Public

It seemed to me that MPs and peers don’t have a great handle on what the public think of the Bill or RIPA. Some debaters asserted that lots of people cared and knew about RIPA/DRIP; others asserted that very few people knew. Personally I think that a lot of people care, but very few really know, largely due to the opaque way such legislation occurs. Quoting Baroness Kennedy (Lab): “We should always remember that it is the practice of those who draft legislation about the functions of the security services to make it as complex and impenetrable as possible, and that is what this legislation is—obscurantist lawmaking at its height.” Additionally, few people are interested enough to do much research, and I find I agree with Hazel Blears: “[the IOCCO] report has probably been read by perhaps a handful of people in this country.” Generally there was agreement amongst many that it is important to get the public on side, to have a public debate, and to build trust.

### Transparency

Dr Julian Huppert (LD) raised two amendments, both of which were withdrawn in the interests of time, knowing that neither would pass. I thought both of these were excellent amendments. The first proposed to require collection of data on RIPA etc. requests, to provide better analysis. The government asserted that they are already going to be doing annual transparency reviews, and will look at amending the code of practice on acquisition and disclosure of communications data later this year. Personally I’d have been happier with a statutory requirement rather than the government just saying “trust us”. The second amendment was to allow companies to report statistics on the number of RIPA requests received – to allow companies to provide their own transparency notices annually. The government completely disagreed, and reasserted that doing so would count as tipping off. They further asserted that it is the place of the IOCCO to report [12] on the number of requests received. While the IOCCO report is excellent, I think that statistical information can hardly be dangerous, especially when appropriately bucketed. Furthermore, the IOCCO do not report on the number, size, or duration of retention requests – although maybe they will do so in the future.

### Private Companies

Several individuals referred to how much data private companies hold, and yet the public has no worries, whereas the government has strict rules. There was definitely an appetite to include private data in some form of future legislation. It appears to me though that there is a big difference here – I can choose which companies I work with, whereas I cannot opt-out of having RIPA requests served against my data. There was also no discussion whatsoever on the impact of RIPA on private companies – RIPA results in vast numbers of disparate requests from a number of organisations, some of whom won’t be especially familiar with the technologies being requested. I think that will need to be addressed in future legislation, possibly with a central clearing house for authorities/agencies who are less familiar with both the legislation and the companies themselves.

### Technical Competence

It was obvious that the level of technical/IT competence of many in both houses was seriously lacking. Indeed, Baroness Lane-Fox (of lastminute.com fame) said as much. What terrified me most though was that some seemed to have just enough knowledge (or briefing) to be dangerous in their incompetence – Helen Goodman (Lab) (see Straw Men, below) is the perfect example of this.

### Straw Men

The debate was full of straw-man arguments, especially on the government/pro-Bill side. This is no surprise, but it was somewhat disappointing nonetheless. Still, a number of MPs and peers were cognisant of this and called it out. I must take a moment, however, to pour out my scorn for Helen Goodman (Lab). She represented the worst of someone briefed with just enough knowledge to be dangerous, who also seemed to believe in a binary straw-man world. I highly recommend you read her diatribe in Hansard, but be careful of spit-takes…

### Other

There were a small number of anti-EU/ECHR debaters. It appeared the government barely, if at all, consulted the devolved administrations. Many supporters of the Bill seemed to imply that whatever the police and intelligence agencies ask for, they should get, which is rather scary. Lord Hodgson (Con) captured this rather well when he said: “The bottom line is that the security services and the police have told us that they need this Bill. They deserve our support because they work long hours unsung on our behalf to keep us safe. Therefore, this is a Bill they must have.” I’m hoping he was just referring to the government position.

## References

### Bill and Reports

1. Constitution Committee report on DRIP: Link
2. Delegated Powers and Regulatory Reform Committee report on DRIP: Link
3. Latest DRIP Bill (approved by House of Lords): Link
4. Draft Regulations under DRIP: Link

### Other

10. General Election Timetable
11. Independent Privacy and Civil Liberties Oversight Board Terms of Reference
12. Interception of Communications Commissioner’s Office 2013 Report
13. Independent Reviewer of Terrorism Legislation David Anderson QC
14. RUSI Independent Surveillance Review Panel

# Data Retention and Investigatory Powers Bill

The leaders of the three main parties in the UK parliament are in the process of railroading a bill through parliament which supporters claim is vital for public safety and national security, and detractors claim is an unnecessary and undemocratic power grab by the security agencies.

The truth, as ever, lies somewhere between these two extremes. Most of the reporting on this is very much of the he-said-she-said type, with little or no reference to actual facts. I thought it might be worthwhile to look at this myself.

## My position

I believe that wiretaps, monitoring, and the like are absolutely vital powers, but there must be excellent oversight, and these powers must only be used in a proportionate manner.

I think it’s disgusting that this bill is being rushed through. Yes, it likely took a while for civil servants to review what the ECJ ruling meant, and then draft the bill, but that’s not a good enough justification. I do agree that part of the delay was due to incompetence rather than malice; however, that’s no excuse for the power grab that’s enshrined in this bill.

Furthermore, the bill does a lot more than just address the issues arising from the ECJ ruling, and in fact it alone doesn’t even address those issues – some future regulations will be needed first, and these will need to be passed by parliament. So it doesn’t even meet its stated aims.

The bill as written has a number of areas that concern me. The Secretary of State gets a lot of extra powers in the form of regulations, which given the current views of the Home Office are likely to be abusive and expansive. The assertion that only metadata is covered is potentially false. The expansion beyond UK borders is very troubling, badly drafted, and when combined with the existing shortcomings in RIPA clarifies that any UK citizen using a non-UK-based server has essentially zero protections against their data being queried by the UK government without a warrant.

Many of the assertions being made by the government are false, and I think it’s disgusting that they are willing to lie in this manner. Protections being lauded by some politicians, such as oversight boards and the like, are not enshrined in the Bill.

This bill doesn’t address the shortcomings highlighted in the ECJ ruling, and so it would inevitably be over-ruled in the future.

Overall, this Bill shouldn’t be passed in its current form – the fact that it likely will is a sad indictment of the lack of backbone amongst current politicians when faced with privacy or security concerns.

## DRIP

### What is it?

One big source of confusion is that the bill addresses two different things. The first is retention notices – this is what the ECJ ruled on. The second is warrants for obtaining the information. These two are different, and relate to different data.

On the retention front, the government can issue a retention notice to a public telecommunications operator to retain relevant communications data.

A public telecommunications operator is anyone who operates a public telecoms service (DRIP 2(1)), which is (RIPA 2(1)) any system which exists for the purpose of facilitating the transmission of communications by any means involving the use of electrical or electro-magnetic energy. [amended by DRIP 5 to include:  any case where a service consists in or includes facilitating the creation, management or storage of communications transmitted, or that may be transmitted, by means of such a system.] i.e. anything which moves data from one place/person to another, or does any storage of such. So this includes not just ISPs, and internet pipes, but also the likes of Facebook and Google.

The key word above was “relevant”. There’s a big difference between “relevant communications data” and “communications data”. It appears that “relevant communications data” is used when referring to retention, and “communications data” for obtaining the information – the latter is much broader than the former.

Relevant communications data is the metadata “of the kind” currently specified in the 2009 EC Data Retention Directive schedule. The schedule data is basically addressing, date/time, and duration of communications. A big area of danger is the “of the kind” phrase – that’s seriously vague and could be used for a lot.

Communications data is concretely much broader, from RIPA 21(4):

1. any traffic data comprised in or attached to a communication (whether by the sender or otherwise) for the purposes of any telecommunication system by means of which it is being or may be transmitted; i.e. this is horrendously drafted and could mean anything. I think it’s supposed to mean addresses etc, but also includes hashes, routing info, etc.
2. any information which includes none of the contents of a communication (apart from any information falling within paragraph (a)) and is about the use made by any person (i)of any postal service or telecommunications service; or (ii)in connection with the provision to or use by any person of any telecommunications service, of any part of a telecommunication system; i.e. Basically anything other than content. This includes subscriber information, billing addresses and details, etc. Any service which does analysis of message contents, for example Google and Facebook extracting data for advertising purposes, may also be fair game, although a case could be made that this includes the contents of comms.
3. any information not falling within paragraph (a) or (b) that is held or obtained, in relation to persons to whom he provides the service, by a person providing a postal service or telecommunications service. i.e. essentially a catch-all which includes absolutely everything _except_ message data.

It expressly doesn’t include “data revealing the content of a communication” (DRIP 2(2)).
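The three-way split above can be pictured with a toy classifier. This is purely illustrative: the field names and category labels below are my own inventions, not anything from RIPA or DRIP, but it shows how everything except content ends up in one of the 21(4) buckets.

```python
# Toy sketch of the RIPA 21(4) categories. The field names and labels are
# hypothetical; 21(4)(c) acts as the catch-all for anything else held.

CONTENT_FIELDS = {"message_body", "email_text", "attachment"}             # excluded by DRIP 2(2)
TRAFFIC_FIELDS = {"sender_address", "recipient_address", "routing_info"}  # 21(4)(a)
USAGE_FIELDS = {"login_times", "service_used", "session_duration"}        # 21(4)(b)

def classify(field: str) -> str:
    """Map a provider's record field to the 21(4) bucket it would fall into."""
    if field in CONTENT_FIELDS:
        return "content - out of scope (DRIP 2(2))"
    if field in TRAFFIC_FIELDS:
        return "traffic data - 21(4)(a)"
    if field in USAGE_FIELDS:
        return "usage data - 21(4)(b)"
    # Anything else the provider holds about a subscriber, e.g. billing address
    return "other subscriber data - 21(4)(c)"

if __name__ == "__main__":
    for f in ("message_body", "recipient_address", "billing_address"):
        print(f, "->", classify(f))
```

The point the toy makes is that only the first bucket is protected: a billing address, an advertising profile, or any other record falls through to the 21(4)(c) catch-all.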

### What data will be covered?

See the previous section for details, but basically it depends on whether you’re talking about retention, or what the government can request.

Retention: Definitely addresses, dates, times, and durations for any communication, and possibly much more.

Acquisition and Disclosure: Absolutely everything to do with internet comms and data, except the contents of the comms data itself. This includes who talks to whom, when, and how much is said. It also includes any details the service provider may keep about you, including billing information, addresses, etc. For ISPs this may not sound too bad, but things get more complicated with services such as Facebook and Google. They build up analyses of your behaviour for advertising – that may be in scope. They keep track of who you ‘Like’, groups you join, websites you visit, and so on. So, pretty much everything is fair game. Note though that this is broadly an already existing power under RIPA.

### Where is covered?

Everywhere. Or more specifically, any operator that offers any service to the UK public.

DRIP 4(6) can also be used to require a foreign person/company to maintain an interception capability, including for conduct outside the UK. So the very equipment and capabilities which the EU and UK have an embargo on for Syria (and others) can be mandated (and subsidised (DRIP 1(4)(g), RIPA 14)) in other countries by the UK government.

DRIP extends or clarifies this extra-territoriality for interception, requiring that an interception capability be maintained, and also for retention and disclosure.

### Who is covered?

Everyone, of any nationality. Nothing has changed there since RIPA/the Data Retention Directive. Under DRIP section 4 this has been expanded (or clarified, as the government would have you believe) to include non-UK people, not in the UK, on non-UK servers.

### Other powers and regulations?

One of the areas of concern relates to DRIP 1(3), which allows “The Secretary of State [to] by regulations make further provision about the retention of relevant communications data.”  DRIP 1(4) does call out the restrictions on what may be in these regulations, and these don’t look too broad at a first glance, but I’m not holding my breath. One concern is that these regulations can be used to require telecoms operators to give retained data to the government without needing a warrant (DRIP 1(6)(b)), although to be fair that was already the case under RIPA section 22.

The regulations “may refer to communications data that is of the kind mentioned in Schedule to the 2009 Regulations”, which to my mind also means that it may be able to refer to other data.

DRIP 2(5) does at least require that any such regulations be passed by parliament; however, no assurances have been given that these won’t also be rushed through. Such regulations will have to be passed soon, and I wouldn’t be surprised if they are also rushed: DRIP 1(4)(h) states that the regulations may include provision about “the 2009 Regulations ceasing to have effect and the transition to the retention of data by virtue of this section” – and replacing the 2009 Regulations, overruled by the ECJ, is precisely the stated reason for rushing this Bill through.

### Extraterritorial extensions

Section 4 of DRIP expands (or clarifies) how the retention notices and RIPA may be used outside the UK. Much of section 4 details who the notices should be addressed to, and so on. Basically, section 4 states that retention notices and warrants may be issued to non-UK companies and people, with no UK presence, if they offer a public service on the internet.

DRIP 4(4) does give some protection to foreign companies/people who receive a retention notice/warrant – local laws can be taken into account when deciding whether implementing such a warrant/notice is ‘reasonable’ and hence whether they have to comply.

RIPA 1(1) and 1(2) expressly state that it is only illegal to perform interception without warrant within the UK.

Therefore, interception can be performed on UK citizens using non-UK-based servers without a warrant, while retention notices need to be issued by a Secretary of State, and access to retained and other non-interception data needs to be requested by one of a long list of designated persons.

## Government Assertions

Home secretary Theresa May has stated that lives will be lost if the legislation does not go through.

David Cameron has repeatedly said that it is essential to track and catch “terrorists, paedophiles and criminals”, and warned that the “consequences of not acting are grave”.

“I want to be very clear that we are not introducing new powers or capabilities – that is not for this parliament. This is about restoring two vital measures ensuring that our law enforcement and intelligence agencies maintain the right tools to keep us all safe.

“As events in Iraq and Syria demonstrate, now is not the time to be scaling back on our ability to keep our people safe. The ability to access information about communications and intercept the communications of dangerous individuals is essential to fight the threat from criminals and terrorists targeting the UK.”

Simon Hughes MP asserted, on Murnaghan, Sky News, “We are limiting the number of people who can ask for data, fourteen bodies are no longer able to ask for data at all, all the councils in the country, every council in Britain at the moment can ask for data and that’s been consolidated into only one place for request.  David Anderson, a really good guy, who is there making sure that our terrorism legislation is working, has been given additional powers, we have a new scrutiny committee – those are Lib Dem gains.”

### My response

At no point has the government explained why they need 12 months of data. Why not 11, or 13? Or 6? The ECJ ruling called out this shortcoming in the EC Directive, and it still hasn’t been addressed.

Using Iraq and Syria as examples of why this is needed is disingenuous at best, and an outright attempt to play on our fears at worst. The government can already use warrants to target collection of both data and metadata against individuals and groups involved in both of these conflicts. Why, therefore, do they need these extra powers?

The assertion that they are not introducing new powers is demonstrably false. The bill allows the Secretary of State (which in practice means any one of a number of individuals) to create regulations which may vastly broaden current law, within the scope allowed under sections 1(3)-1(7) and 2(4), although with the caveat that such regulations would need to be approved by parliament (2(5)). Note however that such approval is often rather pro-forma.

Furthermore, while DRIP is allegedly just targeted at fixing the retention law issues resulting from the ECJ ruling, it does much more. While sections 1 and 2 of DRIP deal with retention notices, sections 3-5 deal with warrants under RIPA.

The largest power grab relates to the extra-territorial aspects of RIPA. The argument is that this was already implicit in RIPA, and that’s possibly accurate. However, the specific area of concern is the 4(8) amendment (5A) – this seems to allow retention/disclosure requests against foreign companies who have any offices or offer any services in the UK, but covering conduct that occurs outside the UK by non-UK nationals. So can the UK government make a request against Facebook for the monitoring of a US citizen in the US, and punish Facebook if they don’t comply?

As for Simon Hughes’s claims, absolutely none of these are detailed in DRIP. While they may be good in theory, these are currently paper tigers against threats to privacy.

## ECJ Ruling

The ECJ ruling is all about proportionality:-

1. No “differentiation, limitation or exception” on types of traffic data
2. No “objective criterion” to ensure that authorities only have access as needed
3. No “objective criteria” on basis of period of retention
4. “Risk of abuse” – not sufficient safeguards, nor ensure that data destroyed at end of retention period
5. No requirement that data be “retained within the EU”

So how does DRIP measure up?

1. DRIP refers to “relevant communications data”, by which it means the Data Retention Regulations 2009 Schedule. This specifically calls out what metadata should be retained, so the type of data is limited. In theory DRIP 1(1) also limits collection to only that which is proportionate under RIPA 22(2). However, the list of grounds is so broad that I wouldn’t be surprised if retention notices are issued against all users, “in the interests of national security” or similar. So DRIP probably passes the ECJ test, but it’s hardly reliable.
2. Within the UK, access is limited under RIPA 22(2), and may only be granted by a “designated person” (of which there are rather a lot). This is probably sufficient, although as noted in (1) these are so broad that they can apply to a lot of situations.
3. There are still no objective criteria for the period of retention, so this is a fail. These criteria may be coming in the regulations mentioned in DRIP 1(3) and 1(4).
4. The protections in RIPA may be sufficient to address the “Risk of abuse”, although there is no mention of oversight. There’s currently nothing ensuring destruction of data at the end of the retention period, although this may come in the future regulations, so currently this is a fail.
5. There’s still no requirement that data be retained within the EU, although this may be clarified in the future regulations.

For (1), (2), and (4), the important question is one of oversight. The justifications for retention and disclosure are so broad that anyone can be caught up in them – any oversight body must ensure the proportionality of both retention and disclosure.

For (3), (4), and (5), the future regulations are vital – and we’ve no idea what they say until they are published.

So, overall, DRIP does not currently address the issues within the ECJ ruling, although it may do so in the future, when the Secretary of State publishes new regulations – which are allowed under DRIP, but which must also be passed by parliament. Given this, it makes you wonder why the rush to pass DRIP, if the ECJ issues won’t be addressed until those regulations are published, at some future date.

Data Retention and Investigatory Powers Bill – the draft bill being proposed by the government https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/328939/draft-drip-bill.pdf

Regulation of Investigatory Powers Act 2000 – the act being amended by this bill http://www.legislation.gov.uk/ukpga/2000/23/contents

Anti-terrorism, Crime and Security Act 2001 (Part 11) – Required retention of communications data, and in section 102 defines “communications data” http://www.legislation.gov.uk/ukpga/2001/24/part/11

ECJ Ruling – This is the ruling which ruled that blanket collection was illegal, and which allegedly forced the government to push this bill through http://curia.europa.eu/jcms/upload/docs/application/pdf/2014-04/cp140054en.pdf

Data Retention (EC Directive) Regulations 2009 Schedule – Lists the data to be retained http://www.legislation.gov.uk/ukdsi/2009/9780111473894/schedule

## Afterword

FYI, I have just sent my MP the below email.

Dear Paul Burstow,

RE: Data Retention and Investigatory Powers Bill

I am writing to ask you to consider voting against the upcoming Data
Retention and Investigatory Powers Bill, or as a minimum vigorously
participating in the debate against it.

I think it is highly undemocratic and frankly disgusting that the Bill
is being rushed through in this manner. The government has been aware
of the issues arising from the ECJ ruling since April and has done
nothing to address them – although maybe this should be ascribed to
incompetence rather than malice. The Bill as drafted will not even
address all the current shortcomings highlighted by the ECJ ruling –
rather a Secretary of State will need to create, and parliament pass,
additional regulations, so there should be no need to push through the
retention parts of the Bill so urgently, without also making the
proposed regulations available even in draft form.

Liberal Democratic MPs have been trumpeting additional protections they
have wrestled from the Tories, however only two of these are actually
present in the Bill – that of a maximum twelve month retention period
and an expiry in 2016. For the former, as this is something the ECJ
ruling had already hinted at, it was hardly a major win. For the
latter, that’s a valid win, although also not a major one – the Bill
may be reinstated in the future without any guarantee of a larger
debate on privacy versus security, and the roles of RIPA, CMA, DRIP,
the Anti-terrorism, Crime and Security Act, and others. As stated, any
other protections are not in the Bill, and so could be dropped,
amended, or varied as desired by the whim of any government.

Contrary to claims being made by both LD and Tory MPs, the Bill does do
more than just deal with the retention issues arising from the ECJ
ruling. Three of the five main sections deal with investigatory powers,
and not retention. The Bill is rather confusing because of this, as it
contains and doesn’t clearly differentiate between retention,
acquisition of retained (or other) data, and interception, nor
differentiate between the different data (“relevant communications
data” versus “communications data”) which may be requested for each,
plus it uses vague terms such as “data of the kind mentioned”.

To summarise, the Bill doesn’t just do what the government says it
does, and doesn’t fully do what the government says it does. The
protections being trumpeted cannot be relied upon. And finally, even if
there is need for an urgent Bill to meet the ECJ ruling, this isn’t it.

For more details on my thoughts, please feel free to see my blog post
at http://goo.gl/HT1PkW or contact me via email or telephone.

Yours sincerely,

Ian

# Thoughts on Scottish Independence

I recently read an article on Scottish Independence at Bella Caledonia giving several reasons why the author had opted for independence. While I agreed with many of his thoughts, I disagreed with his conclusions. So I thought I’d address each of his points, below.

## 1) Less Extremist Government

I don’t disagree with the argument, but I do with the conclusion. If, as the author contends, Scotland is more left than rUK (which I do agree with), he states that the logical conclusion is for Scotland to secede as it can have little impact on the Westminster government and the madness that will emanate from there. I disagree with this conclusion – Scotland can, together with other parts of the UK, help tame the excesses of a kneejerk lean to the right. Just because your parties aren’t in the majority doesn’t mean you can’t help make an extremist government less extremist, by voting against laws, participating in debates, and convincing the backbench MPs who will be willing to vote against their own party.

So, which is better for Scotland? A leftish Scotland with a more extreme right-wing neighbour to the south, or being part of a less extreme union, and with more devolved powers to further lessen the impact in Scotland? I contend that the latter is better.

## 2) Renewables

As noted in the article, Scotland is blessed with many opportunities for wind and hydro power, which helps account for Scotland’s success in this area. I am also disgusted by the current rush to fracking.

Where I think the author is in error is his hope, and later assumption, that an independent Scotland will put in place an energy policy that is pro-renewables. There is no evidence this will actually happen. However, there is plenty that a Scotland within the union could do in this space, through planning rules, and there is no reason why energy policy couldn’t be devolved to Scotland – meeting the author’s aims without independence.

## 3) Nukes and a Neutral State

Nuclear weapons are safe until detonated, and it’s highly unlikely that this would happen accidentally – in the 69 years since their creation it has never happened, despite the often crude design of early versions. HMNB Clyde is 12 miles from Dumbarton, the closest town of any note, and this is well outside the blast and flash radius for any warhead stored at HMNB Clyde. Yes, there could be a fallout problem, but that would affect all of Scotland, and so the location is irrelevant.

So their proximity to Glasgow is irrelevant, other than in a case of nuclear war, in which case their location would be moot as Glasgow would almost certainly be nuked anyway.

As a neutral country, Scotland wouldn’t sit on the UN Security Council other than very rarely, when it may be elected as a non-permanent member for two years. It wouldn’t have the power of veto. So a neutral Scotland would be largely as important on the world stage as Ireland – that is to say, not especially.

The Scottish arms industry employs 30k people, and Scots make up around 7-8% of the UK military. Getting out of the arms industry sounds noble, and it is, but what are you going to do with the ~50k newly unemployed and the £1.8 billion loss to the Scottish economy?

## 4) Not being at war

Again, I can’t disagree with the author’s hopes, and I agree that a weakened Scotland and rUK would be less likely to go to war. I understand that the author probably doesn’t care that the Falklands would likely be invaded by Argentina and we would be unable to recapture them – hell, given the current state of our armed forces I wouldn’t be surprised if that happens anyway. Realistically though, Scotland should just do away with its armed forces altogether – £2.5 billion won’t buy you much, and it won’t be useful for much.

I’m being serious here – why even bother having any armed forces?

## 5) To Remain in Europe

While it may seem logical that if Scotland secedes it should be automatically accepted into the EU, that’s not what any treaties say. Scotland will need to apply for membership. Some countries, like Spain, may put up a fight – they don’t want to set a precedent for their Basque community. Others will fight for it. Scotland will likely be accepted, but at the earliest this will take several years – what happens in the interim?

As soon as Scotland is independent, it loses EU membership. What happens between then and the point in the future when EU membership would be resumed?

As for the argument about the rest of the UK, I too am nervous, but only a little. I don’t expect a majority of the country would back our leaving the EU – protest votes for UKIP likely won’t equate to actual action. To be fair, there is, terrifyingly, a chance that a voting majority might – the incompetence of the Better Together and Proportional Representation camps has me worried about their ability to get out the vote. With Scotland as part of the UK it would be less likely that the vote would go UKIP’s way.

Similar to (1), which would you prefer – an independent Scotland which will not, for an unknown but likely non-negligible period of time, be a member of the EU, possibly with a non-EU neighbour to the south, or a unified UK which will most likely remain a member of the EU?

## 6) More Immigration

Likewise I’m sickened by the xenophobia in the UK. However, one reason why Scotland may be less xenophobic than England (excluding xenophobia by Scots against the English, of course) may be that there has been much less foreign (i.e. non rUK) immigration to Scotland than the rest of the UK.

We should also differentiate between ‘legal’ and ‘illegal’ migration – there has definitely been vastly less impact from illegal migration in Scotland vs southern England purely due to the fact that England is a big buffer zone which will filter the majority out long before they reach Scotland.

Personally I am very pro-immigration – immigrants have been a net benefit to our economy and I hope this will continue. There are issues, but these can be addressed without blocking immigration. Yes, Westminster is making noises about getting stricter on immigration, but this is to address the problems, not the benefits – for example, should an immigrant be able to claim benefits immediately on entering the country? I’ve no problem with people migrating to the UK to work, but there is a minority who come here for other reasons.

A pro-immigrant Scotland could suffer from the same problems as England, unless it also tried to put similar controls and restrictions in place as Westminster is considering. Given this, would you be any better off as an independent country?

As an aside, as a citizen of an independent Scotland that wasn’t part of the EU, the author wouldn’t have been able to live and work in France in his 20s – which comes back to (5), above.

## 7) Against Mass Surveillance

Again, I find myself agreeing with the author’s thoughts, but disagreeing with his conclusions. Laws such as RIPA, despite their many faults, do partially limit surveillance within the UK. An independent Scotland would have no such protections – as a foreign country, the UK intelligence agencies would essentially have carte blanche to spy on Scottish citizens for whatever reasons they wish. Scotland would have no real counter-intelligence capabilities. And, as stated, Holyrood cares as little about privacy as Westminster – the author’s hope that a constitution would reduce levels of citizen surveillance is unlikely to be met.

It should be noted that the aims of the intelligence agencies include strengthening the economy of the UK. How well do you think Scotland will do against rUK when you have zero intelligence-gathering capabilities and are up against experts?

## 8) NHS Privatisation

The author seems to be arguing with himself here – NHS Scotland is already safe (and has been since prior to devolution, in fact). Furthermore, Scotland spends more per head than England – are you sure this will be affordable in a future independent Scotland?

## 9) Electoral Reform

I am also in favour of electoral reform, although I seem to be in a minority because I actually quite like the House of Lords, as a necessary evil in stopping, or at least slowing down, kneejerk legislation.

However, an independent Scotland is just as likely as a non-independent one to see any kind of reform – that is, there’s no chance. Only just over a third of voters in Scotland voted for PR, despite a 50% turnout (the second highest in the UK).

As an aside, “while we can all arrive at an informed opinion” it’s still depressingly rare that people do so – and when a participatory democracy relies on the education of the participants then this is much more dangerous than a representative democracy.

## 10) Scottish Republic

Largely irrelevant for this argument.

## 11) To Avoid the Backlash

This argument could also be used for voting against independence. An independent Scotland will still need to rely on rUK for a great deal, and a petty Westminster could say “screw Scotland”.

Examples include shipping all our illegal immigrants to pro-immigration Scotland, closing the large number of rUK public sector offices in Scotland and moving them to rUK (thus causing an immense glut in unemployment in Scotland), disallowing Scottish use of the pound, choosing to primarily import electricity from France rather than Scotland, charging much more for treatment of Scottish nuclear waste at Sellafield, and so on.

## 12) For A’ That

I think the author is underestimating the importance of the economy. Yes, Scotland’s per-capita GDP is high while oil and gas are flowing, but that’s hardly a certain future. After that, Scotland has problems. But that’s an argument for another day.

Overall, I greatly sympathise with the author’s views, but not his conclusions. His view seems to be “I don’t like Westminster, and the direction things are going, so rather than help fix things I’d rather Scotland went it alone and left rUK to screw things up themselves.” Personally I believe a strong Scotland as part of the UK could help limit the damage of the current lurch to the right, just as a strong England has at times helped limit the damage of a Scottish lurch to the left. We’re better together – apart, there is an increased chance that the short-sighted idiocy of a few may cause problems for the many.