
Data Sharing and the Blue Badge Parking Scheme

Back in 2008 the government announced that they were going to reform some of the ways the disabled parking / blue-badge scheme worked to reduce the amount of fraudulent use. When I heard this discussed on the radio, the government’s spokesman talked about providing £10 million towards a data sharing scheme to enable a council parking attendant to check on the validity of a blue badge issued by another council.

I have a knee-jerk adverse reaction to the words “government” and “data sharing” – especially when they are used in the same context as “the prevention and detection of crime”, so I checked out the strategy document (PDF) on the Department for Transport’s (DfT) site and was pleasantly surprised to find a sensible proposal:

“The preferred option going forward is to create a system which allows sharing of data through the linking of existing local authority databases. DfT will provide local authorities with up to £10m in funding over the next three years to establish a system of data-sharing.”

That was back in October 2008. Now that a consultant has finished a survey of all the IT systems local councils use to administer the scheme, the DfT is starting to run data sharing workshops with local councils and beginning to design the system (December status update – PDF).

In the meantime Rochdale council has made a successful bid to the Government Connect benefits realisation fund to investigate the “establishment of a national database with local access” for the blue badge scheme.

So it will be interesting to see whether a distributed approach is maintained, and I’d like to offer my suggestions so that privacy is built in from the start. Because when you look at the problem, there is probably no need to share data at all.

Implement a simple question and answer approach. Not data sharing and not a centralised database.

Whose data is it?

People apply to their local council for a permit, so it is the job of the local council to look after that data. It’s the permit holder’s data that they entrust to the local council, and in Data Protection Act terms the local council is the Data Controller. The name of the issuing council is written on the permit along with a permit number (which also identifies the gender of the owner) and the date the permit expires.

Who needs to access it?

Parking enforcement officers from all over the UK (and perhaps eventually Europe) don’t need access to any more data than is written on the permit.

All they need is the answer to one question: “is this permit valid, invalid or being used illegally?”.

They don’t need to see any of the information that the issuing council has about the permit owner.

A parking officer may also like to report a concern to the issuing council – that they suspect the permit may be being used illegally. Sending this information to the council that issued the permit would then allow the council to get in touch with the permit holder directly. This keeps the relationship between the local council and the permit holder and doesn’t make the permit holder subject to potentially inconsistent actions of parking attendants anywhere in the country.

A network of local databases

From a technical perspective, the system constraints are simply this:

  • Each council needs to keep the responsibility of looking after the data of their permit holders.
  • Other authorities (who are properly authorised and validated by the issuing council) need to be able to ask a question of this information, and receive an answer.

So here’s one way of building this system.

Each council maintains their own database of permits and permit holders (as the DfT initially suggests). They look after the security of the data and they don’t export the data to any other system.

Each council issues all of the other councils an electronic access key that allows them to ask a validity question from the issuing council’s database.

Whenever a parking enforcement officer needs to check whether a permit is valid, they send:

  • The permit ID in question
  • Their ID (e.g. their badge number – something that can individually identify them)
  • Their council’s access key

to the council that issued the permit (they can read this from the permit). The issuing council would then reply with one of four answers:

  1. We didn’t issue that permit. (It’s probably a forgery.)
  2. We issued that permit, and the permit is valid.
  3. The permit is invalid (it may have just expired — this allows the issuing council to set their own grey area) and so doesn’t confer any rights to disregard parking restrictions.
  4. The permit is invalid and has been reported stolen or withdrawn by the issuer and should be seized.

The parking attendant can then perform the relevant statutory actions.

No personal data needs to be shared between the issuing council and the parking attendant, wherever they are in the country.
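To make the shape of that exchange concrete, here is a minimal sketch of the check a parking officer’s terminal might perform. It assumes a simple JSON-over-HTTPS lookup; the endpoint name, field names and response codes are my own illustrative inventions, not anything the DfT has specified.

    # A minimal sketch of the question-and-answer check described above.
    # The endpoint, field names and response codes are illustrative
    # assumptions, not part of any real DfT or council system.
    import requests  # third-party: pip install requests

    ANSWERS = {
        "NOT_ISSUED": "We didn't issue that permit (probably a forgery).",
        "VALID": "We issued that permit and it is valid.",
        "INVALID": "The permit is invalid (e.g. expired) and confers no rights.",
        "SEIZE": "The permit has been reported stolen or withdrawn; seize it.",
    }

    def check_permit(issuing_council_url, permit_id, officer_id, access_key):
        """Ask the issuing council whether a permit is valid.

        Only the permit number (already printed on the badge), the officer's
        ID and the querying council's access key are sent; no personal data
        about the permit holder ever leaves the issuing council.
        """
        response = requests.post(
            issuing_council_url + "/permit-check",  # hypothetical endpoint
            json={
                "permit_id": permit_id,
                "officer_id": officer_id,
                "access_key": access_key,
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["answer"]  # one of the keys in ANSWERS

The important property is what isn’t in the request or the reply: no name, no address and no medical information, just the permit number already printed on the badge and one of the four answers above.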

Notes

  1. I’m not an expert on parking, permit fraud or enforcement. There may be many reasons why this simple query / answer approach wouldn’t solve the problems with fraudulent permit use. However, this is the best place to start. If people think that a parking enforcement officer needs more information then they should make the case for this. It is always best to share the minimum amount of data necessary to remain compliant with the third (only get and use data you need) data protection principle.
  2. I’ve simplified this discussion to the broad question of data copying, data sharing or my preferred question-and-response approach, which would share the minimum of personal information. There’s a separate technical discussion about the best way of achieving this, and whether it would be best implemented using public-private key encryption with a central key-management system operated jointly by all councils; a rough sketch of one such signed exchange follows these notes. There would be some other issues to explore around how long a key is valid for, and how a local council revokes another authority’s access.
  3. I’d also be tempted to consider whether using near-field RFID chips in the permits would add value to the system and make the permits harder to forge. It would also reduce the frequency of number keying errors by a glove-wearing parking attendant on a cold day, as their terminal would just be able to read the permit ID through the windscreen.
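To put a little more flesh on note 2, here is one way the access keys could work using public-private key signatures rather than shared secrets. The library choice (Python’s cryptography package) and the message layout are assumptions made purely for illustration.

    # A rough sketch of a signed validity query: the querying council signs
    # each request with its private key, and the issuing council verifies the
    # signature against the public key registered through a jointly operated
    # key-management system. Revoking access is then just a matter of
    # removing a public key from the register.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Querying council: generate a key pair once. The public key is registered
    # centrally; the private key never leaves the council.
    querying_council_key = Ed25519PrivateKey.generate()
    registered_public_key = querying_council_key.public_key()

    # Sign the query (permit ID plus officer ID) before sending it.
    query = b"permit=AB123456;officer=PA-042"
    signature = querying_council_key.sign(query)

    # Issuing council: look up the querying council's registered public key
    # and check the signature before answering the validity question.
    try:
        registered_public_key.verify(signature, query)
        print("Signature valid: answer the validity question.")
    except InvalidSignature:
        print("Signature invalid or key revoked: refuse to answer.")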

The future of privacy talk at ORG


Bruce Schneier spoke on the subject of The Future of Privacy at the Open Rights Group on Friday. The ORG is the ‘UK equivalent’ of the EFF and I’m proud to be one of its founder members. I’ve heard Bruce speak a few times, most recently at WEIS 09, and I’ve always been impressed at his relaxed presentation style. This was a great event and ORG has posted a video of it on its web site. I’d recommend watching both the presentation and the Q&A afterwards.

UPDATED: Here are the links to the presentation and the Q&A.

A few highlights (with comments):

  • In relation to large government databases, built to facilitate data mining techniques for suspicious activities, Bruce commented that if you’re looking for a needle in a haystack, it doesn’t seem very sensible to add more hay!
  • On CCTV he posited that we’re living in a unique time. Ten years ago there were no cameras; now there are hundreds of cameras and we can see them all; in ten years’ time there will be many hundreds of cameras, but we won’t be able to see any of them.
  • When ‘life recorders’ become widely used (and they’d only need about 1TB a year to record your entire life) he could see that not having an active life recorder would be seen as suspicious — much like leaving your mobile phone at home or turning it off is now presented as “evidence” that you were up to no good.
  • Ephemeral conversation is dying.
  • The real dichotomy is not security v privacy, but liberty v control. He argued that privacy increases power, and openness decreases power. So citizens need privacy and governments need to be open for a balanced democracy to prosper.
  • The death of privacy has been predicted for centuries (for instance, see Warren and Brandeis’ The Right to Privacy published in 1890). Without a doubt privacy is changing and this is a natural process — but it isn’t inevitable. Our challenge is to either accept this, or to reset the balance between privacy and the mass of identity-based data gathered for commercial gain and state security. Laws are the prime way to reset that balance.
  • When asked the one thing he’d like to change, he replied it would be to implement European style data protection legislation (like our own Data Protection Act) in the US.

Abuse of radio buttons and check boxes

I’m particularly sensitive to interface design and I saw a real horror this week. [The] BCS¹ recently conducted a members’ survey. Question six managed to break the long-established model of radio buttons (select one) and check boxes (select more than one).

I guess they wanted to make sure that people had answered the question, so they required a ‘none’ option. If you selected this radio button, it used some JavaScript to clear any of the check boxes you’d previously selected.

One of the best bits of interface design advice I ever heard was from Jakob Nielsen. In his list of Top Ten Mistakes it is number eight.

Consistency is one of the most powerful usability principles: when things always behave the same, users don’t have to worry about what will happen. Instead, they know what will happen based on earlier experience. Every time you release an apple over Sir Isaac Newton, it will drop on his head. That’s good.

The more users’ expectations prove right, the more they will feel in control of the system and the more they will like it. And the more the system breaks users’ expectations, the more they will feel insecure. Oops, maybe if I let go of this apple, it will turn into a tomato and jump a mile into the sky.

It’s important that any application or website uses mental models that people are familiar with. In security you’re often asking a critical question, and that’s all you want the user to think about, not a newly invented or misapplied design metaphor.

¹ Formerly the British Computer Society, it has recently become “bcs – The Chartered Institute for IT” and is no longer referred to as “The BCS”.

An analysis of the T-Mobile breach

There’s been a lot in the press for the past few days about the recent T-Mobile breach. Basically it appears that a number of staff at the mobile phone company have been selling customer data which included the customer’s name, their mobile number and when their contract expired. There hasn’t been a great deal of information about this other than the BBC’s report, the Information Commissioner’s press release (PDF) and a short post on T-Mobile’s customer support forum.

From an information security and Data Protection Act compliance perspective there are three breaches of the Act.

T-Mobile

There’s no information about how the data was extracted from T-Mobile’s systems, and I accept that it could have been by people copying the information down onto pieces of paper. However, as the BBC story talked about “millions of records from thousands of customers”, I’ll assume there was a bulk extract of data.

T-Mobile is probably in breach of the seventh principle in that they failed to ensure:

“Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data”

It is a breach of section 4(4) of the Act if a data controller fails to comply with the data protection principles in relation to all personal data, and the Information Commissioner (for the moment) can commence enforcement proceedings against the company, in the course of which T-Mobile will have to undertake to implement better security and processes.

However, what’s interesting to me is whether T-Mobile had ever properly quantified the commercial value of a customer’s name, mobile number and contract expiry date, and if so, whether that value was adequately reflected in their risk analysis.

If this were the case then two technical steps I’d expect them to have taken would have been:

  1. to make it very hard for people to run and save a report that had more than (say) 20 such records (most people working in customer service wouldn’t even need this many records in a report; a sketch of this kind of guard follows this list)
  2. to implement some Data Leakage Prevention (DLP) technology that looked at the type of data moving out of the organisation in email, on removable media such as CDs, USB sticks and as physical printouts
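As an illustration of the first step, here is a minimal sketch of a report-export guard. The threshold, the exception and the function name are all invented for the example; T-Mobile’s actual systems are unknown to me.

    # A minimal sketch of a report row limit: small reports export freely,
    # anything larger needs a named approver. All names here are illustrative.
    MAX_REPORT_ROWS = 20

    class ReportTooLargeError(Exception):
        """Raised when a report exceeds the permitted number of records."""

    def export_report(rows, approved_by=None):
        """Export a customer report, enforcing the row limit."""
        if len(rows) > MAX_REPORT_ROWS and approved_by is None:
            raise ReportTooLargeError(
                "Report has %d records; exports above %d need management "
                "approval." % (len(rows), MAX_REPORT_ROWS)
            )
        return rows

A control like this doesn’t stop a determined insider, but it turns a silent bulk extract into something that either fails or leaves an approval trail.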

The employee / employees

The employee(s) [the T-Mobile site now appears to indicate that it was just the action of a single employee] have committed a clear offence under Section 55(1) of the Act.

“A person must not knowingly or recklessly, without the consent of the data controller obtain or disclose personal data or the information contained in personal data”

If convicted they face a maximum fine of £5,000 (and if the Information Commissioner gets his way then next year this could become a custodial sentence).

The data recipient

The person buying the data has also committed a Section 55 offence as they obtained the data without T-Mobile’s consent.

The identity of the person or company who purchased the data hasn’t been made public. It will be interesting to see whether it was a small phone dealer, a broker or one of the other big mobile phone companies. If the latter, then there’s a real issue to explore – was this the action of a ‘rogue’ salesperson or something that was tacitly condoned by the organisation?

For a market to exist in personal data there has to be both a buyer and a seller, and the value of the data is defined by the buyer: if no one wanted to buy this information then the T-Mobile employee(s) wouldn’t have stolen it to sell. Even if the data was traded through a list broker, the recipient organisation should still have asked itself where the data came from, as alongside the section 55 offence they will have breached the first (be fair when you get, use and share data) and second (tell people what you will do with their data, do nothing more) data protection principles.

When this case finally comes to court I’ll be really interested to see the action taken against the purchasers of the personal data.

In the future I expect to see all databases that hold personal information equipped with full read auditing which would create an audit log entry whenever a user read an individual record, or ran a report that included that record.

Audit: User JohnDoe viewed this record at 10:23 on 22/10/09
Audit: User JaneDoe included this record in the report CustomersAboutToLeave at 19:47 on 23/10/09
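Here is a minimal sketch of what that read auditing might look like at the application layer. The logger name, function names and log format are my own assumptions for the sake of the example.

    # A minimal sketch of read auditing: every single-record view and every
    # report run writes an audit entry naming the user, the record and the
    # time. Names and format are illustrative only.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("customer.read.audit")

    def audit_record_view(user_id, record_id):
        """Record that a user viewed an individual customer record."""
        audit_log.info("User %s viewed record %s at %s",
                       user_id, record_id,
                       datetime.now(timezone.utc).isoformat())

    def audit_report_run(user_id, report_name, record_ids):
        """Record every customer included in a report a user has run."""
        for record_id in record_ids:
            audit_log.info("User %s included record %s in report %s at %s",
                           user_id, record_id, report_name,
                           datetime.now(timezone.utc).isoformat())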

I’d also expect mobile phone companies to correlate the read activity of their users (recorded in this type of audit log) against the customers who went elsewhere at the end of their contracts.
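A crude sketch of that correlation follows, assuming the read-audit log can be reduced to (user, customer) pairs and that the list of customers who left is known. The threshold and data shapes are invented for illustration.

    # A sketch of correlating read-audit activity with customer churn:
    # count, per member of staff, how many of the customers whose records
    # they read later left at the end of their contract.
    from collections import Counter

    def suspicious_readers(read_events, churned_customers, threshold=50):
        """read_events: iterable of (user_id, customer_id) pairs taken from
        the read-audit log. churned_customers: set of customer IDs who left.
        Returns (user_id, count) pairs at or above the threshold,
        highest first."""
        counts = Counter(user for user, customer in read_events
                         if customer in churned_customers)
        return [(user, n) for user, n in counts.most_common() if n >= threshold]

On its own this proves nothing (customer service staff legitimately read the records of customers who later leave), but unusually high counts would at least tell an investigator where to look first.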

New data security law book launched

On Monday I had the pleasure of attending the launch of Stewart Room’s new book ‘Butterworths Data Security Law and Practice’. Stewart wrote the definitive guide to the Data Protection Act for techies, the equally snappily-named Data Protection and Compliance in Context. This is also the course book for the ISEB Practitioner-level certificate in Data Protection.

Stewart’s new book is – as he admitted – elephantine in its size and coverage (for comparison it’s physically larger than Ross Anderson’s Security Engineering). It is the first book that addresses infosec and law together and I’m really looking forward to getting hold of a copy. I had a chance to browse one of the display copies at the launch and it looks really useful.

With probably about a hundred infosec and law professionals in the same room the conversations were really engaging. There was a lot of talk about the prominence of breaches in the news, especially after last week’s T-Mobile revelations along with the ongoing consultation on the Information Commissioner’s new powers. A few of the people I spoke to were curious to see what changes there would be in non-financial services companies once the Commissioner had levied his first sizable fine.

Yet another meaning for C, I and A

Yesterday I heard Andy Smith, the Chief Security Architect for the Identity and Passport Service (IPS) speak at the BCS Central London branch meeting about the security behind the new National Identity Register which supports the National Identity Card.

On one slide he highlighted what he considered the three biggest threats to Information Security:

  • Complacency
  • Apathy
  • Inattention (Andy called it Human Error, but I hope he’ll excuse my re-wording to fit into the familiar triad)

So now there are three security meanings for C, I and A.

  1. Confidentiality, Integrity and Availability : The original
  2. Common Sense, Intent and Application : Plan on doing sensible things well, and keep doing them
  3. Complacency, Inattention and Apathy : It is really hard for humans to do security things 100% of the time

Andy’s presentation was really interesting and I’m glad to have had the opportunity of hearing his views, but in my view the session failed to address the publicised topic of “ID Cards: The end of the Private Citizen – or good corporate ID management?” There wasn’t a speaker to address whether this was the “end of the Private Citizen” and questioners were discouraged from being “too political”. As IT professionals it is really important we participate in the debate about state-wide databases and the consequences of insecurity and secondary uses. That’s not a political discussion, but a socio-technical discussion about the future application of technology. The UK chapter of the ISSA held a similar event in July this year which included former home secretary David Blunkett, a speaker from the Home Office and Pete Bradwell from Demos, alongside many technical presentations. Perhaps it was the table I was sat at, but our discussion ranged widely through technology, security and ethical issues.

At last night’s BCS event I’d like to have heard Andy talk more about the technical details of how his team resolved some of the many interesting challenges they will have faced over the past few years, especially the architectural solutions and processes devised to maintain separation of duties within the IPS.

As a root identity provider the ID card and the NIR are attractive, however I can’t help thinking of Bruce Schneier’s 2007 essay on The Risks of Data Reuse which ended:

“History will record what we, here in the early decades of the information age, did to foster freedom, liberty and democracy. Did we build information technologies that protected people’s freedoms even during times when society tried to subvert them? Or did we build technologies that could easily be modified to watch and control? It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.”

A warning to the serially incompetent and the wicked

At last week’s Data Protection conference the new Information Commissioner – Christopher Graham – made his first public speech. With the title ICO: new powers, new funding and a new Commissioner it was certain to establish the direction we’d see the ICO taking for the next five years. The slides from the speech are available on the ICO’s web site (PDF), and the Commissioner didn’t disappoint.

All organisations need to be aware of the Commissioner’s new powers to fine those that breach the Data Protection Act. These powers come into force in April 2010. The good news is that the Commissioner still wants to take a carrot-driven approach and help organisations to do the right thing: the ICO’s first reaction will always be to advise and assist.

However, the Commissioner was clear that he planned to use his new powers. The level of the fine has not been set by Government, and he’s lobbying for fixed fines with a maximum of “hundreds of thousands of pounds”. He anticipated that around 20 organisations – “the serially incompetent and wicked” – would feel his stick-based sanction in the first year.

If you’re concerned about how well your organisation complies with the Data Protection Act and how securely you look after the information you hold then there’s no better time for someone to have a look. I offer an integrated Information Security and Data Protection gap analysis that will show you just how well you’re doing, and suggest simple (and often low cost) ways to improve.

If you’d like to find out more then please call me on 020 8144 8456 or contact me. On the other hand if you are incompetent or just plain wicked then watch out — the ICO may still focus on the carrot-driven approach to compliance, but he’s about to get a really big stick that he intends to use.

The Other C, I and A of Information Security

Ask anyone who works in Information Security what the initials CIA mean and they will say “Confidentiality, Integrity and Availability”. These are the three measures used to assess the impact that an unwelcome event would have on an asset.

When I train people, I talk about another more important Information Security meaning of CIA: Common Sense, Intent and Application.

Common Sense

Good Information Security requires everyone to use their common sense. Have you ever wondered why some people have common sense and others do not? Why some users remember strong passwords, and others would think it is OK to use their cat’s name and then write it down on a post-it?

This common-sense imbalance used to worry me until the day I heard Ira Winkler give a presentation where he argued that “there’s no common sense without common knowledge”, and you know, he’s absolutely right.

When users (and sometimes security professionals) do something that’s as far from common sense as can be, I’ve found it’s generally because we don’t share a common knowledge. For instance:

I know it is wrong to write your password down because it allows someone to easily log on to a system and perhaps do bad things while pretending to be you – that’s common sense:
they don’t understand why anyone would ever want to do this.

I know that writing the password to an encrypted file on the CD holding the file devalues the encryption – that’s common sense:
they don’t understand why the data needed to be encrypted in the first place and what encryption means, and decided that the password was less likely to get lost if they wrote it on the CD.

In Information Security we are all guilty of assuming that everyone understands threats and vulnerabilities in the same way that we do; but they don’t, which is why their common sense doesn’t match ours. To develop an instinct for good common sense, you need common knowledge – which means proper education for your users and for the whole of your Information Security team.

Intent

My dad used to have a phrase that really annoyed me when I was a kid. He’d say “If a job’s worth doing it’s worth doing well”—especially when my homework came back with C grades. I’m reminded of this whenever people talk about doing projects or initiatives in Information Security.

My experience is that it is a waste of time to be half-hearted about security. Worse still, it can have the opposite effect to the one you intended.

Take any simple control that’s documented in a process or a policy. If people see it’s not enforced, or has a variable implementation based on someone’s position in the organisation chart, then it sends the message that all controls are optional.

It is better to do a few things well, than lots of things poorly.

Implement security with a positive intent to do it well. If you know you’re going to make a half-baked attempt at a project, pick a simpler project you know you’ll do well.

Application

Much of what we do in Information Security is dull. Checking, maintaining, documenting, cleaning, auditing, testing – just making sure that what needs to be done is done and done well.

My observation is that people – and especially technical people – get more excited about playing with new things than they do about keeping the old things going. Sure, they might not describe it as ‘playing’ and use words like ‘evaluating’, ‘installing’ or ‘configuring’, but at the end of the day it is the challenge and excitement of learning the new that excites them.

Good security though isn’t always about the new. It’s about doing the tedious stuff well and paying attention to it. It is about:

  • Checking the logs on a regular basis
  • Making sure that the roles defined in the role-based access are correct
  • Running the lessons-learned exercise after an incident and following up the action points until they’re all completed
  • Updating the DR documentation when you change a server configuration
  • Cleaning the backup tape heads and verifying the backup worked properly
  • Filling out the visitor book for the server room
  • Writing the documentation for X before moving on to Y
  • Chasing the last person who didn’t complete the Data Protection training course

It takes application from everyone in the team to keep on top of these and hundreds of other little tasks. It takes application from management to make sure it happens.

So there you have it. My alternative exposition of the CIA triad:

Common Sense: Invest in the security education of users and the IT team

Intent: Plan on doing each security project or initiative well

Application: Keep doing the dull things

Risk – a book worth reading

For me, one of the best parts of a holiday is the time sitting by the pool or at the beach (and less happily the airport) catching up on some serious reading. If there’s one profession that requires an insatiable capacity to read it is information security. I find it hard to keep up with the constant stream of journals, email newsletters, legislation, reports, blog updates and white papers to read and I don’t find enough time for books. On holiday I leave the laptop at home, and fill a suitcase with my reading pile.

One book I read this summer was Dan Gardner’s Risk – The Science and Politics of Fear. It is a well-written examination of the psychology of risk and fear which really addresses how we perceive risk. Bruce Schneier recommended it a few months ago and I wholeheartedly second his recommendation. A large part of our business is about assessing risk and then communicating that assessment to other people. Risk explains the ways our risk assessments are sub-consciously biased and also explains why people don’t always buy into our risk assessments.

The book has three parts:

In the first, Gardner condenses the academic research into risk perception. He explains the heuristics (rules-of-thumb) and biases that our unconscious mind (Gardner calls this gut) uses to assess risk and how these affect the conscious, reason-based decisions we’d like to think that our conscious mind (head) takes.

The second part of the book examines how governments, corporations and the media have used fear as an influencing tool to take advantage of these unconscious biases.

Finally Gardner looks at how these biases and heuristics affect our modern-day, risk-assessed world in the areas of crime, the environment and terrorism.

Why is this book useful to an information security professional?

All security professionals have to assess risk; it is the essence of what we do. So gaining an insight into how our own risk assessments can be hijacked is really useful. Understanding how other people can (intentionally or otherwise) manipulate our risk assessments and those of our colleagues and managers is even more valuable. Here are a few examples:

  • I don’t know exactly how many new pieces of malware are produced each day, but because I’ve heard the figure 10,000 quite a few times, my personal ‘guess’ will be biased towards 10,000, not 10. This is the anchoring and adjustment heuristic. If you tell me that the real figure is 7,000 I’ll accept it as ‘about right’; if you tell me it’s really 1,000 I probably won’t believe you because my unconscious has anchored on ‘around 10,000’.
  • The more times stories appear in the news about companies losing personal data, the more likely we – and our colleagues – think this will happen in our organisation. This is the availability heuristic at work. The availability heuristic says that the easier it is for our unconscious mind to recall an example of an event, the more we overestimate the likelihood of that event occurring. The opposite is also covered by the availability heuristic of course – the harder it is for us to recall an example of an event, the more we underestimate the likelihood of that event occurring.
  • There’s also the affect heuristic, which describes how people’s assessment of the probability and impact of events is biased, based on whether their gut perception of the event is emotionally good or emotionally bad. Data theft is bad, so the affect heuristic means that our unconscious mind, and those of our colleagues and managers, instinctively overestimates the probability of data theft occurring.

Reading Risk – The Science and Politics of Fear provides information that will help you to be a more complete security professional. The next time you sit in a meeting arguing that something bad is extremely unlikely to occur you will be able to understand why you’re the only person in the room with that opinion. More importantly you’ll know what to do to get everyone at the table to move to a rational rather than instinctive approach to the risk assessment.