
Layer Cake

The three-tier architecture has dominated application deployment for decades. Presentation Layer, Logic Layer, Data Layer - every application designer is fed this bag of chips again and again, till every other chip is forgotten. Most companies have rigid standards that ensure this holy trinity of layers is hammered into every policy document and SOP.

But...

As is true of so many religions, this one is also a bit outdated (Microsoft has even discontinued that page in its architecture guidance). The theory was that many presentation layers could reuse the same business logic (encapsulated neatly in an application layer), but in my many years of slaving at these things I've rarely seen an app layer used by anything more than a single presentation layer as part of a single application. Indeed, this model was created for the client-server world and should have been thrown out when the world moved on, but it was what everyone knew in those days, so it somehow made it to the web world. A whole rationale was created to retrofit that design. Presentation became Web, Logic became App, Data remained Data, and three-tier became three-layer. The roles changed a bit too - the Web layer was to stick to static web serving, App became the big daddy handling business logic but now also UI interactions, while Database remained where it was.

As long as things were small in scope there was not much pain involved, but the seams started stretching quickly as sites started doing really big things. The first big casualty was scalability. Let's see how.

Web applications are supposed to have horizontal scale with hundreds of small front-end servers; this sounds fine in the three-tier model till you start working out the details. Let's say the standard farm starts with 2 web, 2 app and 2 database servers. Each web server connects to each app server, leading to a total of four connections. The same four connections are required between app and DB, so a grand total of eight connections. However, these 8 connections are actually giving you a horizontal scale of only 2, since the consumer has only two entry points to be used with a load balancer. Now say you want a hundred entry points to be load balanced - that's a hundred web servers. If you limit yourself to two app servers, that's still 200 connections. Of course, a hundred web servers can't be serviced by 2 app servers - say 10 are required. That's 1,000 connections. And on the database side, if say 4 databases are required, that's another 40 connections, for a grand total of 1,040 connections to create, maintain, firewall and secure. As you can see, this quickly becomes unmanageable.
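To make the arithmetic concrete, here is a back-of-the-envelope calculator for the full-mesh connection count - a sketch only, using the topology assumptions described above:

    // Back-of-the-envelope: in the three-layer full mesh, every web server
    // talks to every app server, and every app server to every database.
    function fullMeshConnections(web: number, app: number, db: number): number {
        return web * app + app * db;
    }

    console.log(fullMeshConnections(2, 2, 2));    // 8 - the starter farm
    console.log(fullMeshConnections(100, 10, 4)); // 1040 - the "scaled" farm
    // Scaling in one-on-one modules instead (next paragraph) gives
    // 100 + 100 = 200 connections, but at the cost of 300 servers.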

An option is to scale in full modules - i.e. a hundred web servers connected one-on-one to a hundred app servers and a hundred database servers. This solves your connection problem but pushes your server requirements through the roof, not to mention making stateful applications a nightmare. Of course, given that web servers serving static requests use very little CPU, we can merge the web and app servers (especially since the app server runs a web server anyway), but this will certainly fall afoul of your security policies. You need three firewalls, they will say. Why? Because three layers need three firewalls. Sounds a bit circular, but that's what you'll face.

Can security be managed with fewer than three layers? Of course it can - why should the number of firewalls depend on the number of layers? Put all three firewalls on the public edge (or two before the web+app server and one before the database) if you want. The insistence on a firewall between web and app comes from the old separation of presentation and logic servers, where only the presentation layer talked to the logic layer and no direct access to the app layer was allowed. Security also tended to be layered into zones - all databases in one zone, and so on - because firewalls were expensive and limited resources. Today, web servers do static things and app servers do dynamic things (using a web server), so the same ports and protocols are open on both. The web server merely passes the dynamic request along. If someone can compromise the web server, then they can by definition also compromise the web server running on the "app" server. Firewalls, meanwhile, are no longer expensive resources; micro-segmentation allows even a single database to have its own personal firewall.
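To see the contrast in concrete terms, a micro-segmented setup can be expressed as a per-workload allow-list rather than three zone firewalls. The sketch below is purely hypothetical - the workload names and ports are illustrative assumptions, not a real policy:

    // Hypothetical micro-segmentation policy: each workload carries its own
    // personal firewall, and anything not listed is denied by default.
    type FlowRule = { from: string; to: string; port: number };

    const allowList: FlowRule[] = [
        { from: "internet", to: "web-app",   port: 443 },  // one public edge
        { from: "web-app",  to: "orders-db", port: 5432 }, // only this app reaches this DB
        // no web-to-app firewall needed: they are the same workload here
    ];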

Then there's the small matter of changing things. The three-layer architecture is notoriously resistant to tiny changes. Everything requires a full build and thus a full round of testing for the entire app. Want to change a comma? Run a full build and test cycle. Microservices? Sure, just create an application instance each time (did I tell you about the build and test process already?). Look a bit closer and you can see a clear shadow of the earlier practice of building, compiling and deploying EXE applications. Modern applications are closer to document editing than to traditional build-compile-deploy - edit, preview, publish, version - but the shift in paradigms is not an easy one.

Modern web applications are more akin to a network of interconnected processes, some interacting with each other and some with human users. They are not monoliths with a single neat front, middle and back. This makes them far more flexible and scalable than the 3-layer model. Indeed, the "network of components" model can incorporate some very complex behaviour, where one service comes from one provider in one domain and another from a different provider in a different domain. It is not unusual today to have applications where authentication services are provided by Google, storage by Amazon S3, analytics by an on-premise Tableau, maps by OpenStreetMap, database by Zoho, ticketing by Zendesk, and monitoring and log management by Splunk. Such an app - impossible under the 3-layer model - will usually outperform and outscale traditional models, and be cheaper and faster to build to boot.
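Sketched in code, such a composed application might look like the snippet below. The interfaces are hypothetical stand-ins for the providers' SDKs (the real APIs differ); only the shape of the composition is the point:

    // Each capability comes from a different provider in a different domain.
    // These interfaces are illustrative stand-ins, not real SDK types.
    interface AuthClient { verify(token: string): Promise<{ id: string }>; }
    interface StoreClient { put(key: string, data: Uint8Array): Promise<string>; }
    interface TicketClient { note(userId: string, text: string): Promise<void>; }

    async function handleUpload(
        auth: AuthClient,      // e.g. Google sign-in
        store: StoreClient,    // e.g. Amazon S3
        tickets: TicketClient, // e.g. Zendesk
        token: string,
        file: Uint8Array,
    ): Promise<string> {
        const user = await auth.verify(token);        // authenticate
        const url = await store.put(user.id, file);   // store the object
        await tickets.note(user.id, "stored " + url); // log a ticket note
        return url;
    }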

A few big changes in web applications really push against this three-layer model. The first is the emergence of scripting languages embedded in the page markup, where all the work of the application (presentation and logic) is handled by the web server. The original CGI model - the web server handing control to another EXE for logic processing and then receiving the output - was thrown out of the window with PHP, which even today runs the bulk of the web (including Facebook). The second was JavaScript and the emergence of the single-page architecture, where the entire UI shifted to the client, outside the boundary of the company's data center, and the servers became mere vehicles for feeding static objects and JSON data to the client.
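The single-page split in miniature: the server's only job is to hand over JSON, and the browser does the rest. A minimal sketch, assuming a hypothetical /api/profile endpoint and a page element with id "name":

    // Minimal single-page pattern: fetch JSON, render it client-side.
    // "/api/profile" and "#name" are assumptions for illustration.
    async function renderProfile(): Promise<void> {
        const res = await fetch("/api/profile");
        const profile: { name: string } = await res.json();
        document.querySelector("#name")!.textContent = profile.name;
    }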

The final nail in the coffin is serverless architecture, where micro-services run directly on a PaaS container with neither web nor app nor DB layers visible. Each of these movements has huge advantages over traditional design - embedded scripting makes comma changes a breeze and allows for true DevOps. Single-page architecture is often the true decoupling of presentation and app; it has made app development dramatically easier and more democratic (think Facebook apps). Serverless architecture is by far the easiest to scale massively on the cheap. Isolation and micro-segmentation, things public clouds do in their sleep, obviate the need for the expensive zone-based 3-firewall approach companies continue to favour.
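And the serverless endgame in miniature: a single function the platform invokes on demand, with no visible web, app or DB server at all. A sketch following the common AWS-Lambda-with-API-gateway handler shape (an assumption; serverless takes other forms too):

    // A minimal serverless function: no servers to manage, no layers to
    // firewall - the platform invokes the handler and scales it on demand.
    export const handler = async (event: { path: string }) => {
        return {
            statusCode: 200,
            body: JSON.stringify({ echo: event.path }),
        };
    };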

For all the proven scale and benefits, only a handful of companies that are not startups make use of these alternatives internally. Why? Because they're still baking the layer cake.

