II - Security Model - What CAcert did to deal with the Secret Cells

Is the secret cell a concern to CAcert?

This question has to be analysed seriously. In the Assumptions, we assumed that we were a legitimate target, but that doesn't say we should care. If it isn't a danger to us, why bother? Why not encourage these very bright tech people to run our infrastructure?

There are three general reasons why we cannot accept the approach. The first is simple reputation. We are a security provider, and we have a set of principles that puts our users first. Although intangible, it is pretty clear that our security credibility would be destroyed if it were found out that we had been breached. Our members would feel betrayed. And, in most ordinary senses of the word, they would be right.

The second is purpose. If we could, hypothetically, be clear that the purpose was limited to national intelligence only, and would not affect any of our members, we might be able to construct a model along the lines of 'membership of civil society.' We might do our bit for world peace!

But unfortunately no such constraint can be seriously entertained. Although earlier we had no hard evidence, it was widely suspected that the intelligence agencies were also passing information across to civilian agencies, and even companies. Now, the Snowden revelations make it entirely clear: the NSA product is widely shared with the DEA, the FBI, the IRS, and any two-bit agency that can establish a claim to Homeland Security. Their manual even says they must lie in court about their sources!

And, notwithstanding the circus of angst from the American mainstream media, there is no particular reason as yet to believe that any other country is different, and much to suggest that some countries are worse: China in particular shows no shame in running the Great Firewall of China, an anti-SSL system.

The wide array of policing agencies is also subject to corruption (in the West, the war on drugs is the primary source of corruption) and self-dealing (seized assets are kept by the agencies, the same source of corruption), and there are now far too many of them to even list, let alone audit and control. There is therefore no rational, believable or responsible way that we can construct a 'civil society' model.

The third general reason comes from an international game theory perspective. Let's say we hypothetically give the nod (perhaps after we catch them?) to an approach by the NSA. What do we then say to the Russians? Or the Chinese?

Do we trust the Americans because we speak their language? Or, hypothetically, should we let each intelligence agency provide one agent each, and have them govern each other?

Such an approach does not work: for every intelligence agency with a pet enemy, we have members who might, or likely will, run into that agency as bona fide targets. With one agency we'd be unbalanced, and in breach of our discrimination principle (Principles); with all agencies, we'd soon have no members left. Our focus is all the peoples of the world, so we are not in a good position to pick the good guys.

So we care. Successive boards of CAcert and the critical teams and members who discuss this are all in agreement:

Which can be drawn broadly: we don't permit private data sharing, customer analysis, or similar things either, for obvious reasons.

What did we do about it?

That all said, how did we integrate our knowledge and experiences into a comprehensive approach to deal with this issue? How did we deal with the clear and compelling danger of the intelligence community, knowing that they could bring massive pressure to bear on us: informally, legally and even physically?

We needed to face several attacks or threats:

Recall that we had looked at the original background check and decided to stop it because of a lack of procedures and rather damaging questions about its approach. This left us with a problem, which we turned into an opportunity.

One of our good people sat down, examined all the information we had available, and thought about it. At some point, inspiration struck: we should pass the whole problem across to an Arbitrator.

A little about that: in CAcert, everyone agrees to Arbitration up front, and the Arbitration Act provides for this to be a serious forum. In short, this makes it strong in law, as the Arbitrator is in effect a private judge for our community. Over time, we've discovered that the answer to any intractable problem has become: throw it over to an Arbitrator and see what this formal approach makes of it.

So, as an expression of this, the very next time the issue of a new critical team member arose, it was put before the Arbitrator. In that case, not only was the person examined, but the entire procedure was also established.

This founding in Arbitration gave the process the protection of a formal system -- documentation, an independent party, and a ruling on the facts. This works across borders and across email, and it importantly neutralises any particular beliefs someone may have about the protection of their own country. An agent of a foreign power may believe that he is lying for his country, but a court of Arbitration does not particularly care; our court gives no permission for a member to perjure himself, and it maintains its powers regardless.

The Arbitrated Background Check, or ABC

Because of this strong basis, we gave it the acronym ABC for Arbitrated Background Check. It always involves an interview, which is documented. Although the nature of the interview is more or less closed & private, you can easily imagine what it is like by researching the security clearance process conducted by intelligence agencies. A core part of it is the discussion of conflicts of interest.

Then, the Arbitrator reviews his facts, maybe researches some more, and makes a recommendation, as a formal ruling. Finally, the information collected is packaged up for later review in case of … complications.

That method has now been rolled out and operated dozens of times. It is harsh, but few have complained afterwards. Some have balked or backed away, of course, and that can be attributed to the harshness of the process.

By passing it across to Arbitration, we covered our first objection: the background check could be strong and invasive, while also being responsible to all parties. But it also covered other bases that became clear in time -- it snookered the sneaky approach:

See where this is going? In order to pass an ABC and hide an involvement with an agency with nefarious interests, the person has to deceive: at a minimum by hiding something clearly requested, which is itself a deception. More broadly, we are looking at perjury, fraud, espionage, etc., all of which would be done before a formally empanelled Arbitrator.

So if there is any chicanery later on, this person will be facing the full wrath of a future arbitrator.

Ha, you might say, what can an Arbitrator do? We cannot put the person in jail, sure, and irreparable damage will have been done. But we can attack where it hurts most: publicity. Given the charges, this is a career-changing moment, and every Arbitrator knows that the information being collected can be used to the fullest if the mutual covenant of trust is breached (ICWatch).

It's also worth bearing in mind that we already know who the person is, because we have already assured the person several times over. Hence:

Which means that we have the identity information already recorded.

Limitations

Any system should be analysed against two options: doing nothing and doing more (Philby). Here are some of the weaknesses or limitations in the system:

What about our role in ‘Civil Society’?

Intelligence agencies generally have four tools: passive signals (listening and open intelligence), active espionage (spying, black ops), cooperation, and legal process. To address straight espionage, we have a Security Policy. However, an agency may sometimes feel that some critical need for cooperation exists; this is often expressed by the canonical example of “we need to decrypt a message in 10 minutes otherwise a bomb goes off,” which we frequently see in the media but which rarely if ever happens in reality.

If an agency were to determine that such cooperation would be valuable, it can serve legal process as with any other person in dispute, and follow the same procedure as with all other difficult issues:

or, "Agencies [are] asked to use the front door in making requests for law enforcement data, and not (as hitherto) steal it... (Campbell)" Further,

Our people are trained in these. This leads us to a corollary, one we can derive from the above but have never actually had to write down:

The only way such a thing can happen is through an Arbitration. And that means the information is gone over carefully, turned into facts, supported by logic, policies and law, and cast into a ruling. Its existence is kept from being secret by various means and mechanisms, although it is plausible and common for an Arbitrator to declare the contents ''under seal''. Which leads to:

Our system is built to surface everything for open analysis, because that is the only way we can stop ourselves from corrupting our own processes. Every request is reviewable, and to bury something is, we hope, practically impossible. This raises a serious question as to how we might deal with a request for cooperation that wishes to suppress its very existence. We likely can't accommodate that, which is worrying, as some governments have shown a willingness to ‘deputise’ their citizens as unwilling spies when it suits them. Although we work hard to protect our members, this may be one area where our members are indeed at a risk which we cannot easily mitigate.

Which answers the final question that is often raised: how do we assist the authorities when there is a real need? It is the same answer we provide to all our own people: file a dispute. An Arbitrator can then verify & validate any order, and issue the instructions to proceed, or not.

Why does this work?

Our method resolved the problem of invasive personal questions, because the process gained protection under Arbitration, and because it leant heavily on the concept of conflicts of interest.

One way of seeing our approach is that we took the five assumptions of the secret cell, above, and reversed them. There is no permission, ever, for a government or similar agency to operate a secret cell. An approach would not be secret, and our processes are documented formally for later review. Contenders are required to declare all their Conflicts of Interest up front. They need to describe their backgrounds.

These combined techniques block the secret cell, as we understand it to be constructed.

But wait, I hear you say -- what happens if an agent lies? That's what spies do, right?

True. This approach covers the honest approach, and it should be noted that the secret cell approach is, in some sense, somewhat honest. But if the agent provocateur simply lies, and is likely to fool us, then we're sunk, right?

To counterbalance this risk, we collect as much information as we can to balance against a later discovery. What this means is that we really want to know who the person is, which we can do well enough, and what their story is. So at a personal level, we are asking our sysadmins to put enough information into our hands that we have a shared risk. Recall, it is all under Arbitration, and any revelation of later lying would open the person up to the full force of the Arbitrator's actions.

Some closing remarks

Remember, we don't eliminate any of these risks. All we can hope to do, and all that risk management is about, is to reduce the risks to a level where the mitigations are cost-effective. For example, if we can knock out 100 approaches, and limit our 'incidents' to one every 10 years, that is a victory! While all that is going on, we are delivering our services to our members.

Also, these techniques don't apply to everyone. And our history is unique, so we got here by following our own path. If you happen to work for one of the big names that was named and shamed by the Snowden revelations, this is not an easy step-by-step manual for you.

That said, there is a path in front of all of us. The path starts with us recognising that the danger exists. This first step is now much easier because, post-Snowden, it is now clear to all the world that the intelligence agencies are a real danger to our society. And, with some knowledge and experience, we can state what their preferred approach is, the secret cell.

That gives any organisation a roadmap for two journeys: how to find the secret cells within, and how to stop this approach from doing damage to your organisation in the future.

But going on that journey starts with you asking yourself, do you care?

Do you care about the privacy of your users? Do you care that your government has tricked you into lying about what you do with the data? Do you care that more than half the population believes the government did this, while the official corporate position remains "We are not part of any PRISM-related program"? Do you care that you are a valid target of espionage? That your compatriots work to defeat your security?

Do you care?


References & Footnotes

(Principles) "Principles of the CAcert Community," CAcert 2007 (http://svn.cacert.org/CAcert/principles.html)

(ICWatch) "Murderous spooks drive journalism project to WikiLeaks," ICWatch 2015 (https://wikileaks.org/Murderous-spooks-drive-journalism.html)

(Philby) "Trust No One - Kim Philby and the hazards of mistrust." Malcolm Gladwell, New Yorker 2015 (http://www.newyorker.com/magazine/2014/07/28/philby) which is a book review of Spy Among Friends: Kim Philby and the Great Betrayal by Ben Macintyre.

(Campbell) "Talking to GCHQ (interception not required) ...," Duncan Campbell 2015 (http://www.duncancampbell.org/content/talking-gchq-interception-not-required)
