In the previous part, we discussed the subject of “information assurance (IA)” in the physical context.
The digital context is similar, and this time I will use e-government as an example. Consider a scenario where a citizen, Bob, wants to access a government web application to change the date on which he must report for jury duty.
We recall the principles of IA: availability, integrity, confidentiality, authentication, and non-repudiation.
While the explanations below are not entirely comprehensive, we run through some of the basics behind each principle, with examples drawn from everyday deployments.
Availability
In this case, Bob must be able to access the web application. If we follow this principle of IA, measures such as the following must be present:
- A reliable network. Some critical services employ high availability: the network infrastructure contains redundancy, reducing the likelihood of a denial of service should a component fail.
- Sufficient resources for the web and database server(s). The servers must be able to accept connections reliably without dropping them. Such applications often undergo load tests, a form of performance testing designed to show how an application behaves under enterprise-scale load. User tasks can be simulated: for instance, a load test could simulate 1,000 users simultaneously checking their records, which translates into 1,000 database queries in a short amount of time.
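The idea behind such a load test can be sketched in a few lines. This is a minimal illustration, not a real load-testing tool (production teams would use something like JMeter or Locust over HTTP); `check_record` is a hypothetical stand-in for the application's record-lookup query.

```python
# Minimal sketch of a load test: fire many concurrent simulated
# queries and measure how long the batch takes. check_record is a
# hypothetical stand-in; a real test would issue HTTP requests
# against the application.
import concurrent.futures
import time

def check_record(user_id: int) -> str:
    """Stand-in for a database query; real code would hit the app."""
    time.sleep(0.001)  # pretend the query takes about 1 ms
    return f"record-{user_id}"

def run_load_test(num_users: int) -> dict:
    """Run num_users simulated queries concurrently and time them."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(check_record, range(num_users)))
    elapsed = time.perf_counter() - start
    return {"completed": len(results), "elapsed_s": elapsed}

stats = run_load_test(1000)
print(f"{stats['completed']} queries in {stats['elapsed_s']:.2f}s")
```

A real test would also record error rates and response-time percentiles, since "all 1,000 queries eventually finished" says little about whether users found the service responsive.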
Integrity and Confidentiality
Say Bob would like to change the date on which he performs jury duty. After he fills in a form and clicks “Send”, how can he be assured that the form is sent the way he intended? How can he be assured that no one reads his information? Clearly, he does not want a man-in-the-middle tampering with or reading his request. In practice, this means an HTTPS connection. To the layman, this typically appears as a security lock in the browser, showing that the connection is encrypted under a certificate from a trusted authority; at all points of the transmission, he sees the security lock in place.
Of course, HTTPS (HTTP over TLS) is today the standard for all websites handling transactions sensitive to interception. If you see a site without such a lock, try forcing the connection over HTTPS. If you can’t, your data is probably at risk.
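What the browser's lock actually checks can be sketched with Python's standard `ssl` module: a default context refuses certificates that do not chain to a trusted root or do not match the hostname. The hostname `example.org` in the commented usage is purely illustrative.

```python
# Sketch of the verification behind the browser's security lock,
# using only the standard library.
import socket
import ssl

def make_strict_context() -> ssl.SSLContext:
    """A context that rejects unverified certificates and mismatched
    hostnames -- this is the default for create_default_context()."""
    return ssl.create_default_context()

def open_verified_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect only if the server's certificate chains to a trusted
    root and matches the hostname; otherwise ssl raises an error."""
    raw = socket.create_connection((host, port), timeout=10)
    return make_strict_context().wrap_socket(raw, server_hostname=host)

# Example (requires network access):
#   with open_verified_connection("example.org") as s:
#       print(s.version())  # e.g. "TLSv1.3"
```

The point of the sketch is that verification is the default: a developer has to go out of their way (disabling `check_hostname` or setting `verify_mode` to `CERT_NONE`) to get the insecure behaviour.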
There are other methods to prevent man-in-the-middle attacks, but these will be covered in a later series.
Authentication
Is Bob really Bob? Could it be Fred, taking Bob’s details and impersonating him? This is where authentication comes in. To verify Bob’s identity, Bob typically needs to produce one or more of the following:
- who he is (a biometric, such as a fingerprint or iris scan)
- what he knows (his password, some cipher)
- what he has (an OTP token, a smart card)
Each of the above is what we consider a factor of authentication. Not all factors of authentication are equal.
While “who he is” is certainly harder to impersonate than “what he knows”, Bob cannot realistically present his fingerprint or iris scan to log into a web application; he may simply lack the hardware to do so. Hence, for most web applications, the most common first factor of authentication is a username and password. Is this good enough?
Well, people do use weak passwords, such as P@ssw0rd. Amazingly, while such a password is weak, it satisfies the password complexity requirements most modern organisations impose. A single factor of authentication is therefore probably not sufficient for a government website, which means a second factor of authentication is necessary. What should it be?
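The point about P@ssw0rd is easy to demonstrate. The sketch below encodes a typical complexity policy (at least eight characters, with uppercase, lowercase, a digit, and a symbol); the tiny blocklist is illustrative, standing in for the large breached-password dictionaries a real deployment would check against.

```python
# Demonstration that complexity rules alone do not reject a
# notoriously common password. The policy and blocklist here are
# illustrative examples, not any specific organisation's rules.
import re

def meets_complexity_policy(pw: str) -> bool:
    """A typical policy: length >= 8, upper, lower, digit, symbol."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

# Stand-in for a real breached-password dictionary.
COMMON_PASSWORDS = {"p@ssw0rd", "passw0rd!", "welcome1!"}

def is_acceptable(pw: str) -> bool:
    """Complexity alone is not enough; also reject known-bad passwords."""
    return meets_complexity_policy(pw) and pw.lower() not in COMMON_PASSWORDS

print(meets_complexity_policy("P@ssw0rd"))  # True: the policy is satisfied
print(is_acceptable("P@ssw0rd"))            # False: it is a well-known password
```

So a password that every complexity checkbox approves of can still be among the first guesses an attacker tries, which is exactly why a second factor matters.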
Ideally, the factors of authentication should differ in kind, because it is harder for an adversary to compromise factors of different kinds. For instance, if the second factor were the answer to a secret question, such as “What is your favourite car?”, a co-worker who already shares Bob’s password may well know the answer too; the answer is just another piece of knowledge. This does not make for a useful second factor of authentication.
In today’s commercial setting, people typically make use of mobile or physical OTP tokens, because an adversary must both know the password and possess the device before a successful impersonation can occur.
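A common OTP scheme is the time-based one-time password (TOTP, RFC 6238), which derives a short-lived code from a shared secret and the clock, so possession of the provisioned device is what produces the right code. Here is a minimal standard-library sketch; real deployments use vetted libraries and securely provisioned secrets, and the secret below is the RFC test-vector key, not anything you should reuse.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, at=None) -> str:
    """Derive a short-lived numeric code from a shared secret and the
    current time (or an explicit timestamp 'at', for testing)."""
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test-vector secret; at t=59s this yields the documented code.
secret = b"12345678901234567890"
print(totp(secret, at=59))
```

Because the code changes every 30 seconds, a stolen password alone is not enough: the attacker would also need the device holding the secret at the moment of login.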
Non-repudiation
Suppose Bob successfully changed his jury date, but fell sick and did not turn up for his modified jury duty. He then claims that he never accessed the web application and that he was “hacked”. How do we verify his claim and, if it is false, invalidate it?
This is where non-repudiation comes in. How do we ensure that, if Bob performed a transaction, he cannot later deny having done it or pin it on some other event? This is where the concept of signing becomes important. Just as in the physical world, digital transactions can be signed. By tying each request Bob makes to an authenticated session, Bob cannot deny that he initiated the changes, unless he claims his account was compromised (which turns the claim into a challenge of authentication).
Non-repudiation is important in transactions (especially financial ones) because we need to be sure of the source of transactions, and be able to track them, lest there be disputes over the transactions themselves or their nature.
We have now run, very briefly, through a simple example where IA principles are enforced on a use case in the digital world. Clearly, the measures for maintaining IA extend far beyond HTTPS and digital signing; we shall leave these to other parts of the series, where we examine the technologies in slightly greater detail.
While the learning curve of this introduction to the digital world steepened towards the end (owing to the basic security concepts involved), we now have a working background for looking at other topics. We will take a short digression before exploring the next series.