Category Archives: Privacy

Privacy Risk Assessment Workshop

On January 20th (a Saturday!) I spent a few valuable hours with fellow practitioners in a privacy risk workshop kindly organised by Professor Eerke Boiten at De Montfort University in Leicester.

I presented a brief overview of the way I’ve started to carry out very basic risk assessments focussed on privacy impact to Data Subjects.

This is the paper I showed that describes my simple privacy risk assessment methodology.

Feel free to use it, modify it, discuss it and improve on it. I think Eerke will organise another event so if you’re interested follow me @withoutfire, @EerkeBoiten or any of the other participants (@SandreJ, @TrialByTruth, @LynnFOI, @TheABB, @FOIkid, @RMGirlUK, @MissIG_Geek) for further discussions.

PECR and Affiliate Marketing

Over the past 12 months, the ICO has developed a significant position on the use of affiliate marketing and the applicability of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR).

In November 2016 the ICO undertook a review of the use of affiliate marketing in the gaming industry [1]. When writing to affiliate marketers the ICO clearly described how it viewed the affiliates’ role:

“The ICO’s opinion is that where an affiliate sends an SMS on behalf of, or promoting the website of, a gaming company, then that affiliate is the sender of that communication and must comply with regulations 22 and 23 of the Privacy and Electronic Communications (EC Directive) Regulations 2003.” [2]

Regulation 22 of PECR states:

22 (1) This regulation applies to the transmission of unsolicited communications by means of electronic mail to individual subscribers.
22 (2) Except in the circumstances referred to in paragraph (3), a person shall neither transmit, nor *instigate* the transmission of, unsolicited communications for the purposes of direct marketing by means of electronic mail unless the recipient of the electronic mail has previously notified the sender that he consents for the time being to such communications being sent by, or at the instigation of, the sender.

I have emphasised the word instigate in 22(2) because it is the interpretation of this word that is important in this matter. In November 2016 in the press release announcing the Monetary Penalty Notices (MPNs) issued to Silver City Tech Ltd and Oracle Insurance Brokers Ltd the ICO stated:

“Affiliate firms are like postmen, delivering the message. It’s the people behind the message whose job it is to make sure it complies with the law. They must make rigorous checks to ensure the rules have been followed.” [3]

The ICO’s position appears to be that the affiliate is the transmitter (sender) of a communication but that the beneficiary is the instigator of the transmission, potentially leaving both the instigator and the sender of an electronic marketing communication open to enforcement action.

This analysis is further backed up by the MPN [4] and Enforcement Notice [5] issued to Vanquis Bank Limited (VBL) in October 2017, where the ICO clearly states that the beneficiary of the affiliate marketing company’s activity is the instigator of the communication, and therefore has an obligation to comply with PECR Regulation 22.

Whilst VBL did not send the emails itself, it contracted with the third party affiliates to send the messages on its behalf. The aim of the messages was to promote VBL credit cards. The Commissioner is therefore satisfied that VBL was the instigator of the direct marketing email messages. [6]

As the instigator of the direct marketing e-mail messages, it was the responsibility of VBL to ensure that valid consent to send those messages had been acquired. [7]

My view is that the ICO is using the ambiguity in PECR to ensure that companies cannot simply pass the blame for inappropriate marketing by characterising a relationship as affiliate marketing where it is more correctly described as a postman paid by results. The ICO’s position is that this type of affiliate relationship is essentially the same as a contract with a mailing house to deliver a company’s marketing message – a relationship where the company benefiting from the marketing would be wholly responsible for compliance with PECR and for making sure it had consent from the recipients of the electronic marketing message. The fact that a third party’s reward is based on results rather than a fixed fee for sending messages is immaterial in the ICO’s view.

Although an affiliate marketer is a data controller in its own right – and in PECR terms the sender of the message – identifying the beneficiary of the activity as the instigator of the marketing message allows the ICO to use PECR (not the DPA) to target known brands with a reputation to protect, and companies with healthier balance sheets than affiliate marketers. I suspect this is a calculated move: the ICO has determined that targeting the instigating beneficiaries will be a more successful enforcement strategy.

The ICO’s test therefore will be the degree to which the affiliate appears to be a reward-based postman or an independent company that also promotes other companies’ products/services alongside their own. I suggest that this assessment would look at the arrangement the benefitting company has with the affiliate and to what degree it has control over the volume of email sent, the format, targeting and language used in the message.

It is possible that the ICO’s approach, based on this interpretation of instigator, would be open to challenge at the Information Tribunal; however, none of the parties that have been subject to this interpretation have yet appealed.


  1. | 10 November 2016
  2. ICO letter to gaming companies, revealed in Disclosure IRQ0662096 | 23 January 2017
  3. | 29 November 2016
  4. | 4 October 2017
  5. | 4 October 2017
  6. VBL MPN (n4) paragraph 44, and VBL EN (n5) paragraph 13
  7. VBL MPN (n4) paragraph 45, and VBL EN (n5) paragraph 14

Is your employees’ privacy one of the first casualties in the battle to secure your information systems?

I’m speaking about the trade off between network security and employee privacy at the International Association of Privacy Professionals (IAPP) European Data Protection Congress in Brussels on the 2nd December.

In the face of modern cyber-threats, communication monitoring and surveillance are essential for the protection of corporate information. But monitoring technology often intrudes on the privacy of system users and, ironically, the capabilities of modern cyber-solutions can bring increasing privacy risks for those users. What are the threats to user privacy of IT monitoring and surveillance tools that allow network communications to be retained for subsequent analysis and replay? What are the legitimate expectations of privacy in the workplace? How can the tensions be reconciled? Here, we will examine the threats presented to the privacy of system users by latest-generation monitoring technologies. We will explore the challenges involved in reconciling the need for robust system security with legal obligations to respect the privacy of system users. We will also consider strategies for managing these challenges and the associated legal risks, including PIA and security risk assessments.

What you’ll take away:

  • An understanding of the privacy risks posed by latest-generation monitoring technologies.
  • Strategies for minimising privacy risks, including an appreciation of the role of consent in programmes of workplace surveillance both now and under the draft GDPR.

I’m really pleased to be co-presenting with Heledd Lloyd-Jones, a specialist privacy lawyer with Bird & Bird. Heledd sparked my interest in the intersection of privacy and information security seven years ago when I attended her brilliant ISEB Protection training course.

There are lots of other really interesting sessions at the conference; I’m really looking forward to The Ten Million Dollar Question: Managing Privacy Risks in Your Supply Chain and Cloud Privacy: How Do International Certification Standards Fit with the Proposed EU Regulation?

Registration for the conference is open now.

Grand Central: Great trains, terrible terms

Recently I travelled to York on Grand Central Railway. I really like their train service because you pay the same fare whether you buy your ticket in advance, at the station, or on the train. I really dislike the terms and conditions for using their on-board wi-fi.

“Grand Central reserves the right to include the name, address and other relevant information relating to the User in a directory for the use of Grand Central users or other third parties, unless specifically requested by the User in writing not to do so.”

As a fair processing notice designed to let users know what Grand Central will do with their data, this fails.

  • I guess by ‘directory’ they mean ‘database’. Directory is a terrible word to use, as most people’s mental model will be of something that’s open to anyone to consult – like a telephone directory.
  • It doesn’t say what use will be made of the data, just the types of people (Grand Central users and other third parties) who can use it.
  • It gives no indication of what could be relevant information. It could mean that they collect details of all the web sites you visit when using that connection, and add those to their ‘directory’.
  • If you were to apply the Information Commissioner’s Principle One test – what would the user expect Grand Central to do with their data?

Needless to say, I didn’t use the wi-fi, but emailed their customer service department once I was back on a real connection. Their response was:

“This is a generic condition from our WiFi service provider. The only detail we collect is email address and we may use this from time to time to contact users with details of Grand Central, offers and promotions. If you wish to be removed from the directory please inform us in writing.”

Which is a much better statement of the data they are collecting, and what they plan to do with it — essentially the fair processing notice that should have been available for using the wi-fi.

There are some lessons here:

  • Telling a user what data you’re collecting and what you are going to do with it is one of the fundamental principles of the DPA.
  • If you use generic text from someone else, then you risk being in breach of the first and second data protection principles.
  • Breaching the DPA at best gets you a letter from the ICO, and perhaps you’re added to his list of ‘potential incompetents’. After all, if you can’t write a basic statement of what you’re going to do with people’s data, you might be equally relaxed about how you look after it. Perhaps all the routers and file servers at Grand Central still have their generic passwords?

Data Sharing and the Blue Badge Parking Scheme

Back in 2008 the government announced that they were going to reform some of the ways the disabled parking / blue-badge scheme worked to reduce the amount of fraudulent use. When I heard this discussed on the radio, the government’s spokesman talked about providing £10 million towards a data sharing scheme to enable a council parking attendant to check on the validity of a blue badge issued by another council.

I have a knee-jerk adverse reaction to the words “government” and “data sharing” – especially when they are used in the same context as “the prevention and detection of crime”, so I checked out the strategy document (PDF) on the Department for Transport’s (DfT) site and was pleasantly surprised to find a sensible proposal:

“The preferred option going forward is to create a system which allows sharing of data through the linking of existing local authority databases. DfT will provide local authorities with up to £10m in funding over the next three years to establish a system of data-sharing.”

That was back in October 2008. Now that a consultant has finished a survey of all the IT systems local councils use to administer the scheme, the DfT is starting to run data-sharing workshops with local councils and beginning to design the system (December status update – PDF).

In the meantime Rochdale council has made a successful bid to the Government Connect benefits realisation fund to investigate the “establishment of a national database with local access” for the blue badge scheme.

So, it will be interesting to see if a distributed approach is maintained, and I’d like to offer my suggestions so that privacy is built in from the start – because when you look at the problem, there is probably no need to share data at all.

Implement a simple question and answer approach. Not data sharing and not a centralised database.

Whose data is it?

People apply to their local council to issue a permit, so it is the job of the local council to look after that data. It’s the permit holder’s data that they entrust to the local council and in Data Protection Act terms, the local council is the Data Controller. The name of the issuing council is written on the permit along with a permit number (that also identifies the gender of the owner) and the date the permit expires.

Who needs to access it?

Parking enforcement officers from all over the UK (and perhaps eventually Europe) don’t need access to any more data than is written on the permit.

All they need is the answer to one question: “is this permit valid, invalid or being used illegally?”.

They don’t need to see any of the information that the issuing council has about the permit owner.

A parking officer may also like to report a concern to the issuing council – that they suspect the permit may be being used illegally. Sending this information to the council that issued the permit would then allow the council to get in touch with the permit holder directly. This keeps the relationship between the local council and the permit holder and doesn’t make the permit holder subject to potentially inconsistent actions of parking attendants anywhere in the country.

A network of local databases:

From a technical perspective, the system constraints are simply this:

  • Each council needs to keep the responsibility of looking after the data of their permit holders.
  • Other authorities (who are properly authorised and validated by the issuing council) need to be able to ask a question of this information, and receive an answer.

So here’s one way of building this system.

Each council maintains their own database of permits and permit holders (as the DfT initially suggests). They look after the security of the data and they don’t export the data to any other system.

Each council issues all of the other councils an electronic access key that allows them to ask a validity question from the issuing council’s database.

Whenever a parking enforcement officer needs to check whether a permit is valid, they send:

  • The permit ID in question
  • Their ID (e.g. their badge number – something that can individually identify them)
  • Their council’s access key

to the council that issued the permit (they can read this from the permit). The issuing council would then reply with one of four answers:

  1. We didn’t issue that permit. (It’s probably a forgery.)
  2. We issued that permit, and the permit is valid.
  3. The permit is invalid (it may have just expired — this allows the issuing council to set their own grey-area) so doesn’t confer any rights to disregard parking restrictions.
  4. The permit is invalid and has been reported stolen or withdrawn by the issuer and should be seized.

The parking attendant can then perform the relevant statutory actions.

No personal data needs to be shared between the issuing council and the parking attendant, wherever they are in the country.
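The question-and-answer exchange described above can be sketched in a few lines of Python. This is purely illustrative – the function and status names (`check_permit`, `PermitStatus` and so on) are my own invention, not part of any real council system – but it shows how the issuing council can answer the validity question without releasing any personal data.

```python
from enum import Enum

class PermitStatus(Enum):
    NOT_ISSUED = 1  # "We didn't issue that permit" - probably a forgery
    VALID = 2       # Issued by us and currently valid
    INVALID = 3     # e.g. expired - confers no parking rights
    SEIZE = 4       # Reported stolen or withdrawn - should be seized

def check_permit(permits: dict, permit_id: str,
                 officer_id: str, council_key: str,
                 valid_keys: set) -> PermitStatus:
    """The issuing council's side of the exchange.

    Receives the three fields the officer sends (permit ID, officer ID,
    their council's access key) and returns only a status code. The
    officer_id would be logged for audit; no personal data about the
    permit holder ever leaves this function.
    """
    if council_key not in valid_keys:
        raise PermissionError("Unrecognised council access key")
    record = permits.get(permit_id)
    if record is None:
        return PermitStatus.NOT_ISSUED
    if record.get("stolen_or_withdrawn"):
        return PermitStatus.SEIZE
    if record.get("expired"):
        return PermitStatus.INVALID
    return PermitStatus.VALID
```

An enforcement officer’s terminal would send the three fields and simply display the returned status; the permit holder’s name, address and other details never leave the issuing council’s database.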


  1. I’m not an expert on parking, permit fraud or enforcement. There may be many reasons why this simple query / answer approach wouldn’t solve the problems with fraudulent permit use. However, this is the best place to start. If people think that a parking enforcement officer needs more information then they should make the case for this. It is always best to share the minimum amount of data necessary to remain compliant with the third (only get and use data you need) data protection principle.
  2. I’ve simplified this discussion to the broad question of data copying, data sharing or my preferred question:response which would share the minimum of personal information. There’s a separate technical discussion about the best way of achieving this, and whether it would be best implemented using public-private key encryption, with a central-key management system operated jointly by all councils. There would be some other issues to explore around how long a key is valid for, and how a local council revokes another authority’s access.
  3. I’d also be tempted to consider whether using near-field RFID chips in the permits would add value to the system and make the permits harder to forge. It would also reduce the frequency of number keying errors by a glove-wearing parking attendant on a cold day, as their terminal would just be able to read the permit ID through the windscreen.

The future of privacy talk at ORG

Bruce Schneier spoke on the subject of The Future of Privacy at the Open Rights Group on Friday. The ORG is the ‘UK equivalent’ of the EFF and I’m proud to be one of its founder members. I’ve heard Bruce speak a few times, most recently at WEIS 09, and I’ve always been impressed at his relaxed presentation style. This was a great event and ORG has posted a video of the event on its web site. I’d recommend watching both the presentation and the Q&A afterwards.

UPDATED: Here are the links to the presentation and the Q&A.

A few highlights (with comments):

  • In relation to large government databases, built to facilitate data mining techniques for suspicious activities, Bruce commented that if you’re looking for a needle in a haystack, it doesn’t seem very sensible to add more hay!
  • On CCTV he posited that we’re living in a unique time. Ten years ago there were no cameras; now there are hundreds of cameras and we can see them all; in ten years’ time there will be many hundreds of cameras, but we won’t be able to see any of them.
  • When ‘life recorders’ become widely used (and they’d only need about 1TB a year to record your entire life) he could see that not having an active life recorder would be seen as suspicious — much like leaving or turning off your mobile phone is now presented as “evidence” that you were up to no good.
  • Ephemeral conversation is dying.
  • The real dichotomy is not security v privacy, but liberty v control. He argued that privacy increases power, and openness decreases power. So citizens need privacy and governments need to be open for a balanced democracy to prosper.
  • The death of privacy has been predicted for centuries (for instance, see Warren and Brandeis’ The Right to Privacy published in 1890). Without a doubt privacy is changing and this is a natural process — but it isn’t inevitable. Our challenge is to either accept this, or to reset the balance between privacy and the mass of identity-based data gathered for commercial gain and state security. Laws are the prime way to reset that balance.
  • When asked the one thing he’d like to change, he replied it would be to implement European style data protection legislation (like our own Data Protection Act) in the US.

An analysis of the T-Mobile breach

There’s been a lot in the press for the past few days about the recent T-Mobile breach. Basically it appears that a number of staff at the mobile phone company have been selling customer data which included the customer’s name, their mobile number and when their contract expired. There hasn’t been a great deal of information about this other than the BBC’s report, the Information Commissioner’s press release (PDF) and a short post on T-Mobile’s customer support forum.

From an information security and Data Protection Act compliance perspective there are three breaches of the Act.

T-Mobile

There’s no information on how the data was extracted from T-Mobile’s systems, and I accept that it could have been by people copying the information down onto pieces of paper. However, as the BBC story talked about “millions of records from thousands of customers”, I’ll assume there was a bulk extract of data.

T-Mobile is probably in breach of the seventh principle in that they failed to ensure:

“Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data”

It is a breach of section 4(4) of the Act if a data controller fails to comply with the data protection principles in relation to all personal data, and the Information Commissioner (for the moment) can commence enforcement proceedings against the company, in the course of which T-Mobile will have to undertake to implement better security and processes.

However what’s interesting to me is whether T-Mobile had ever properly quantified the commercial value of information about a customer’s name, mobile and contract expiration date? And if so whether this was adequately reflected in their risk analysis?

If this were the case then two technical steps I’d expect them to have taken would have been:

  1. to make it very hard for people to run and save a report that had more than (say) 20 such records (most people working in customer service wouldn’t even need this many records in a report)
  2. to implement some Data Leakage Prevention (DLP) technology that looked at the type of data moving out of the organisation in email, on removable media such as CDs, USB sticks and as physical printouts
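The first of these controls could be as simple as a hard cap on report exports. Here’s a minimal sketch – the names (`export_report`, `MAX_REPORT_RECORDS`) and the limit of 20 are my illustrations, not anything from T-Mobile’s actual systems:

```python
# Hypothetical hard cap on the number of customer records a front-line
# user can include in a saved or exported report.
MAX_REPORT_RECORDS = 20

def export_report(records: list) -> list:
    """Refuse any report larger than the cap; larger extracts would
    need separate, audited authorisation."""
    if len(records) > MAX_REPORT_RECORDS:
        raise PermissionError(
            f"Report of {len(records)} records exceeds the "
            f"{MAX_REPORT_RECORDS}-record limit; request authorisation."
        )
    return records
```

The point is not the specific number but that a bulk extract of millions of records should be impossible through the tools an ordinary customer-service user has.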

The employee / employees

The employee(s) [the T-Mobile site now appears to indicate that it was just the action of a single employee] have committed a clear offence under Section 55(1) of the Act.

“A person must not knowingly or recklessly, without the consent of the data controller obtain or disclose personal data or the information contained in personal data”

If convicted they’ll receive a maximum of a £5,000 fine (and if the Information Commissioner gets his way then next year this could be a custodial sentence).

The data recipient

The person buying the data has also committed a Section 55 offence as they obtained the data without T-Mobile’s consent.

The identity of the person or company who purchased the data hasn’t been made public. It will be interesting to see whether it was a small phone dealer, a broker or one of the other big mobile phone companies. If the latter, then there’s a real issue to explore – was this the action of a ‘rogue’ salesperson or something that was tacitly condoned by the organisation?

For a market to exist in personal data there has to be both a buyer and a seller, and the value of the data is defined by the buyer: if no one wanted to buy this information then the T-Mobile employee(s) wouldn’t have stolen it to sell. Even if the data was traded through a list broker, the recipient organisation should still have asked itself where the data came from, as alongside the section 55 offence it will have breached the first (be fair when you get, use and share data) and second (tell people what you will do with their data, do nothing more) data protection principles.

When this case finally comes to court I’ll be really interested to see the action taken against the purchasers of the personal data.

In the future I expect to see all databases that hold personal information equipped with full read auditing which would create an audit log entry whenever a user read an individual record, or ran a report that included that record.

Audit: User JohnDoe viewed this record at 10:23 on 22/10/09
Audit: User JaneDoe included this record in the report CustomersAboutToLeave at 19:47 on 23/10/09

I’d also expect mobile phone companies to correlate the read activity of their users (recorded in this type of audit log) against the customers who went elsewhere at the end of their contracts.
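That correlation could be as simple as flagging users whose record reads are dominated by customers who subsequently left. A hypothetical sketch – the function name and thresholds are illustrative, not any operator’s real fraud-detection logic:

```python
from collections import Counter

def suspicious_readers(audit_log, churned_customers,
                       threshold=0.8, min_reads=50):
    """Flag users whose reads disproportionately cover leavers.

    audit_log: iterable of (user, customer_id) read events, as recorded
    by the full read auditing described above.
    churned_customers: set of customer IDs who went elsewhere.
    """
    reads = Counter()        # total records read, per user
    churn_reads = Counter()  # reads of customers who later left
    for user, customer_id in audit_log:
        reads[user] += 1
        if customer_id in churned_customers:
            churn_reads[user] += 1
    # Flag users with enough activity whose reads skew towards leavers.
    return [u for u in reads
            if reads[u] >= min_reads
            and churn_reads[u] / reads[u] >= threshold]
```

A user who reads thousands of records of which 90% belong to customers about to leave looks very different from one doing ordinary customer service, and that is exactly the signal the audit log makes visible.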

Yet another meaning for C, I and A

Yesterday I heard Andy Smith, the Chief Security Architect for the Identity and Passport Service (IPS) speak at the BCS Central London branch meeting about the security behind the new National Identity Register which supports the National Identity Card.

On one slide he highlighted what he considered the three biggest threats to Information Security:

  • Complacency
  • Apathy
  • Inattention (Andy called it Human Error, but I hope he’ll excuse my re-wording to fit into the familiar triad)

So now there are three security meanings for C, I and A.

  1. Confidentiality, Integrity and Availability : The original
  2. Common Sense, Intent and Application : Plan on doing sensible things well, and keep doing them
  3. Complacency, Inattention and Apathy : It is really hard for humans to do security things 100% of the time

Andy’s presentation was really interesting and I’m glad to have had the opportunity of hearing his views, but in my view the session failed to address the publicised topic of “ID Cards: The end of the Private Citizen – or good corporate ID management?” There wasn’t a speaker to address whether this was the “end of the Private Citizen” and questioners were discouraged from being “too political”. As IT professionals it is really important we participate in the debate about state-wide databases and the consequences of insecurity and secondary uses. That’s not a political discussion, but a socio-technical discussion about the future application of technology. The UK chapter of the ISSA held a similar event in July this year which included former home secretary David Blunkett, a speaker from the Home Office, and Pete Bradwell from Demos alongside many technical presentations. Perhaps it was the table I was sat at, but our discussion ranged widely through technology, security and ethical issues.

At last night’s BCS event I’d have liked to hear Andy talk more about the technical details of how his team resolved some of the many interesting challenges they will have faced over the past few years, especially the architectural solutions and processes devised to maintain separation of duties within the IPS.

As a root identity provider the ID card and the NIR are attractive, however I can’t help thinking of Bruce Schneier’s 2007 essay on The Risks of Data Reuse which ended:

“History will record what we, here in the early decades of the information age, did to foster freedom, liberty and democracy. Did we build information technologies that protected people’s freedoms even during times when society tried to subvert them? Or did we build technologies that could easily be modified to watch and control? It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.”