Dodging Battle

Security is obviously on everyone’s mind these days, with the media gleefully broadcasting, with unfailing regularity, yet another marquee name falling victim to a breach. Much energy goes into discussing the latest attacks, protections and responses, peppered with generous doses of doomsday scenarios. The talk is all of battle, of fighting off the evil hacker enemy. It may not be the losing battle that it looks like at the moment - remember, the war against viruses once looked similarly bleak - but it’s certainly not looking easy.

As the great Sun Tzu probably said, the greatest victory is avoiding battle. Even in this cyber-battle, there are a few ways to do that.

Tokenize internally

Dealing with sensitive information is a necessity, but there are ways to make it less sensitive. The most effective is to mask sensitive data at source and keep it masked all the time. We’ve been doing this to passwords for years but have never gone beyond them. If you store sensitive information in the clear, if applications can read and manipulate sensitive data, then it’s only a matter of time before someone figures out how to extract it by getting past your defences. For years, our most sensitive data were passwords; we learned to convert each one into a one-way-hashed token once, and then allow both people and applications access only to the hash, because some bright brain figured out that knowledge of the actual value of the password was not needed, only a way to compare the hashes reliably. The “one-way” part of the hash ensures that one cannot reverse the process and extract the original password value from the hash. With cloud applications and OAuth we went even further, distancing the tokenisation itself from internal applications. Yes, we still need to protect the hash, but it’s a lot harder to extract useful information from a compromised one-way hash, making it less valuable and hence less likely to be targeted. It can even be sent abroad without setting off any legal issues of cross-border privacy, since the token isn’t the actual protected data.
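The password case can be sketched in a few lines of Python using only the standard library. This is a minimal illustration of the principle - a salted, one-way key-derivation hash, with verification done purely by comparing hashes (the iteration count and function names here are my own choices, not anything prescribed):

```python
import hashlib
import hmac

def hash_password(password: str, salt: bytes) -> bytes:
    # One-way: the stored value cannot be reversed to recover the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # We never compare passwords, only hashes - and in constant time,
    # so the comparison itself leaks nothing.
    return hmac.compare_digest(hash_password(password, salt), stored)
```

Neither people nor applications ever need the cleartext again; everything downstream works on the hash.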

Take, for instance, a person’s PAN. Today our internal databases store the PAN, and applications use it. Yet we could replace the PAN with any ten-character hash and let systems store and use that hash instead. Most applications will not need any change - they only need to compare PAN numbers with other PAN numbers to see if there are matches, and that works just as well with the hash. Only some will (those that try to validate the PAN, for instance); those applications can be reworked. As a first stage, the tokenisation can happen inside the boundary of a company - the first time the company receives an input - but it can later move to an external service (perhaps even the IT department itself). This reduces the number of systems that need bulletproof protection, allowing a company to focus its (always finite) resources on fewer places.
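A sketch of such a tokeniser, assuming a keyed hash (HMAC) so that the mapping is deterministic - the same PAN always yields the same token, so equality checks keep working - while outsiders without the key cannot precompute tokens. The key name and character-space choice below are illustrative assumptions, not a standard:

```python
import base64
import hashlib
import hmac

# Hypothetical secret; in practice this would live in an HSM or vault,
# ideally with the tokenising service itself, not with every application.
SECRET_KEY = b"keep-this-somewhere-safe"

def tokenize_pan(pan: str) -> str:
    """Replace a PAN with a deterministic ten-character token.

    One-way: the original PAN cannot be recovered from the token,
    but two records holding the same PAN still match each other.
    """
    digest = hmac.new(SECRET_KEY, pan.upper().encode(), hashlib.sha256).digest()
    # Base32 keeps the token alphanumeric, roughly PAN-shaped.
    return base64.b32encode(digest)[:10].decode()
```

Applications that merely store and compare PANs can switch to the token with no other change; only the few that inspect the PAN’s internal structure need rework.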

If you don’t have Helen, people won’t be so interested in invading Troy.

Encrypt with the receiver’s key

Every company sends lots of stuff to other companies, whether message streams or files. The standard procedure is to encrypt them at rest as part of the underlying file system, and in transit by encrypting the channel. However, the file or message stream is nevertheless readable by applications and people within the company itself, and thus remains a target for theft. Companies spend a lot of time and effort getting over this weakness - private connections, multiple levels of access, and so on.

Let’s think of it another way. For years we have had PKI - a way to encrypt something so that it can only be decrypted by the receiver. Anyone intercepting the message or file would not be able to do anything with it. We already use SSL to encrypt the transmission channel, and SSL rests on the very same PKI. Now let’s apply the principle not to the channel but to the messages themselves. We encrypt each message with the receiver’s public key; then only the receiver can decrypt it. So if Company A is sending a message to Company B, A encrypts it using B’s public key, and B decrypts it using its private key. This is done for each message. The same can be done with files.

The effect of this is quite interesting. The encrypted packets or files can no longer be read by anyone - hacker, middleman, even employees and systems of Company A (so there’s no point bribing anyone). If the encryption happens at the time of generating the packet or file itself, then the bare data is never on Company A’s systems at all. Company A does not have to go to any great lengths to protect the encrypted packet or file from being stolen, since there is little value in it; one could even store it openly on public networks. This is very different from encrypting the channel, where hacking in at either end, or as a man in the middle, can yield huge dividends. The only way in is to steal Company B’s keys - security that Company A is not responsible for.

The reverse of this technique is used quite often - signed requests. Every Aadhaar request, for instance, has to be signed by the sender. The encryption process and load are exactly the same; only the intent is non-repudiation rather than secrecy. The technology is clearly quite feasible.
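The mirror image is easy to see in code - the sender signs with its own private key and anyone holding the sender’s public key can verify. Again a sketch with the third-party `cryptography` package; the padding scheme and request body are my own illustrative choices, not anything mandated by any particular API:

```python
# Requires the third-party `cryptography` package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Signing is the reverse of encryption: the sender uses its own
# PRIVATE key, and the receiver verifies with the sender's PUBLIC key.
sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

request = b'{"txn": "verify", "payload": "example"}'
signature = sender_private.sign(request, pss, hashes.SHA256())

# verify() raises InvalidSignature if the request was tampered with,
# and the sender cannot later deny having sent it: non-repudiation.
sender_private.public_key().verify(signature, request, pss, hashes.SHA256())
```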

No one will battle to steal a locked box they cannot open.

Move your perimeter

The big challenge of perimeter security is that you have to let legitimate outsiders in while keeping malicious ones out. The more porous the perimeter needs to be, the bigger the chance that some bad actor will slip in. What if we did not have that problem? What if we could move all external interaction to a border far away, so that even if it was breached there was no way back from the border to the core? Chinese emperors avoided assassination for centuries by allowing no one into the Forbidden City - all external interaction was through interlocutors.

This is the easiest implementation of the idea - trusted intermediaries who handle all external interactions. The intermediaries are known beforehand and will typically be far fewer than the total number of external actors a company has to deal with, allowing security procedures to be concentrated. Data interactions - both messages and files - can go through this intermediary, which acts as a protocol break. The air gap between the real perimeter of the company and the perimeter exposed to outside participants is what keeps the company’s perimeter safe. The company sends the data to the intermediary, and whoever is supposed to receive it is told to approach the intermediary and collect it. There’s a matter of managing the external partner’s access rights to the intermediary, but this is no harder than managing the same rights on one’s own perimeter.

In today’s world, the most effective outside perimeter is the public cloud. Done properly, a public cloud is quite difficult to attack, yet it is also completely isolated from a company’s own network. The company should use API calls rather than networking protocols to copy data onto the cloud, ensuring that communication happens one way only. In addition, companies can use private, direct lines to the cloud so that even the API calls (to, let’s say, Amazon S3 storage) do not require a company’s servers to connect to the Internet. Of course, it need not be a public cloud provider - it can be any intermediary that implements a one-way protocol.
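The one-way property is really a design constraint on the interface, which a toy sketch can make concrete. Everything here is hypothetical - the class stands in for a cloud object store reached over an API call (an S3 PutObject, say) - but the point is structural: the internal side gets a write-only facade, with no read or list path back in:

```python
class DropZone:
    """Write-only facade for the intermediary (illustrative stand-in
    for a cloud bucket). Internal systems can push data out, but the
    interface deliberately exposes no get() or list(), so a breach on
    the outside offers no path back to the core.
    """
    def __init__(self):
        self._store = {}  # stands in for remote object storage

    def put(self, key: str, payload: bytes) -> None:
        self._store[key] = payload
        # One-way by construction: nothing here reads data back in.

def publish(zone: DropZone, partner: str, filename: str, data: bytes) -> str:
    # The external partner is told this key and collects the file from
    # the intermediary - never from the company's own network.
    key = f"outbound/{partner}/{filename}"
    zone.put(key, data)
    return key
```

The partner’s access rights apply to the drop zone, not to the company’s perimeter.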

In conjunction with the previous two techniques, this shrinks the attack surface even more dramatically and makes the company a very unattractive target. There’s no point attacking the perimeter if the actual defender is somewhere else.
