The web is a system where establishing yourself requires you to worry about all manner of data issues. You must consider data security: greedy goblins will try to find ways to access your most sensitive information online. You must consider consistency in your data: when you change a value or write a post, how do you know your data will be stored exactly as you set it or wrote it? And what about other users — should they be allowed to modify existing data? You must also consider general access to your information: if a user wants to view a post, or you want to update an existing page, how can you be certain that they or you will get the information when needed? These concerns are all addressed by the data-security concept of the C-I-A triad: confidentiality, integrity, and availability. Depending on the tools you use, implementing these concepts can be much easier.
Amazon Web Services (AWS) advises best practices in its security documentation, covering system design, user management, data protection, and damage mitigation. There are four areas of concern AWS aims to address: accidental information disclosure, data integrity compromise, accidental deletion, and general availability. These can be mitigated using strategies such as permissions, encryption, data integrity checks, backups, and versioning.
AWS suggests different levels of key sharing and permissions depending on the level of operation. For plain infrastructure services, the responsibility falls solely on the developer: they are in full control of what is developed, how it operates, and who gains access. In a container or managed database service, the provider handles most of the key security and grants permissions based on user identity or role. For a fully abstracted service, most permissions are handled by the provider and the service itself, keeping most of the configuration (and the opportunity for error) away from the developer.

AWS also suggests ways to protect data confidentiality, both for data at rest and data in transit. For data at rest, it recommends encrypting the data with tools such as Microsoft EFS, BitLocker, SafeNet ProtectV, Linux dm-crypt, or TrueCrypt. Encryption protects data beyond normal permission access, and these keys can also be integrated with key-management facilities. For data in transit, a variety of tools can ensure the data is secure, but the main idea is again to encrypt the data before it travels. AWS recommends establishing virtual private networks (VPNs), utilizing Secure Sockets Layer (SSL)/Transport Layer Security (TLS), and knowing where the data transits and how it routes between hubs and nodes. It also suggests using private IPsec connections and already-established services where available. For even more confidentiality at the software level, AWS suggests managing security groups, defining isolated networks, using Network Access Control Lists (NACLs), running host firewalls per instance, creating a protection layer that forces traffic through a filter, and enforcing access control in the programs and services themselves.

For example, suppose a user wants to look at a file or site they have read permission for. The system authorizes them against the database by their credentials, and since their connection is not blacklisted, they are given the encrypted data and a key. The data is decrypted, and the user now has the file.
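That flow — check the connection, check permissions, then hand over encrypted data plus a key — can be sketched in a few lines of Python. Everything here is hypothetical: the in-memory `PERMISSIONS` and `BLACKLIST` stores stand in for a real identity service, and the XOR "cipher" is a toy stand-in for real encryption such as AES, used only so the example is self-contained.

```python
import secrets

# Hypothetical stand-ins for an identity service and a blocklist.
PERMISSIONS = {"alice": {"report.txt": {"read"}}}
BLACKLIST = {"203.0.113.7"}

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (XOR) -- applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def fetch_file(user: str, ip: str, name: str, store: dict) -> bytes:
    """Authorize the request, then return the decrypted file contents."""
    if ip in BLACKLIST:
        raise PermissionError("connection refused: blacklisted address")
    if "read" not in PERMISSIONS.get(user, {}).get(name, set()):
        raise PermissionError("user lacks read permission")
    ciphertext, key = store[name]       # server hands over data and key
    return xor_cipher(ciphertext, key)  # decrypt for the caller

# Build the store: the file is encrypted at rest with a random key.
key = secrets.token_bytes(16)
store = {"report.txt": (xor_cipher(b"quarterly numbers", key), key)}

print(fetch_file("alice", "198.51.100.2", "report.txt", store))
```

A user without the `read` permission, or a blacklisted address, gets a `PermissionError` instead of the data — the same two gates described above.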
Integrity compromises are issues whose magnitude can be catastrophic. However, there are ways to ensure integrity in our systems. As with confidentiality, AWS suggests using permissions, which reduce the risk of compromises or deletions. There is also versioning, which keeps a ‘snapshot’ of every change an object has had in its lifetime, allowing the object to be restored to a previous state. Another tool is backups, which also help with availability in some services: if data or logs are lost on one system, another system can step in and restore the discrepancies. Lastly, just as discussed under confidentiality, encryption not only protects against unauthorized priers, it helps ensure that data sent and stored is exactly replicated and complete upon request.

Continuing with the same theoretical user from before, let’s say they have no permission to write to the database. They want to save a new file under a different name and delete the previous file. The user passes their credentials, and their connection is not blacklisted; however, the server cannot verify write permission, so the requests are blocked. In another scenario, the user does have write permission and the request goes through. Later, the administrator notices this in the logs and decides the file must be restored, since the user deleted important information. The administrator can either roll the object back to its previous version, if it was saved under the same name (which in this situation it was not), or restore from a previous backup. Thanks, administrator!
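Two of the ideas above — integrity checks and versioning — fit in a short Python sketch. This is not an AWS API; `VersionedObject` is a hypothetical in-memory store that keeps every snapshot, and a SHA-256 digest plays the role of the data integrity check.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Integrity check: a digest recorded at write time and compared later."""
    return hashlib.sha256(data).hexdigest()

class VersionedObject:
    """Keeps a snapshot of every change so earlier states can be restored."""
    def __init__(self, data: bytes):
        self.versions = [data]

    def write(self, data: bytes):
        self.versions.append(data)

    def restore(self, version: int) -> bytes:
        return self.versions[version]

obj = VersionedObject(b"original report")
digest = checksum(obj.versions[-1])            # record digest at write time

obj.write(b"tampered report")                  # a later, unwanted change
assert checksum(obj.versions[-1]) != digest    # the check catches the mismatch

restored = obj.restore(0)                      # administrator rolls back
assert checksum(restored) == digest            # integrity confirmed
```

The digest comparison is how a mismatch between stored and expected data is detected; the version list is what makes the administrator's rollback possible.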
For availability, AWS recommends duplicating data across multiple servers and regions. The developer can select which regions the data is served from. This structure lets a producer control where the content can be viewed and protects data in case one of these servers suffers a catastrophic event that would otherwise expunge the data. Separating data by region also allows customization tailored to each region: some critical information in one region may not be meant to be observed or widely known in another. AWS recommends its services such as Amazon S3 and Amazon DynamoDB for these purposes. AWS also advises keeping an eye on data integrity, as some services offer no protection against deletion or integrity compromise — in some systems there is no backlog or backup at all. It also recommends dispersing the data between regions rather than duplicating within the same region, since most systems can survive an outage but cannot survive a heavy catastrophic disaster. In fringe cases, DoS/DDoS attacks can remove availability from a location. AWS suggests a shielding service, such as CloudFront, in which a secondary infrastructure absorbs the brunt of the attack. The user is trying to access the webpage, but mean ol’ jokey jester over here is barraging a single server with his malice; the user can be deferred to another server, or the server can deflect the attacks to different endpoints, and the user in the end still gets the data he requires. Likewise, if a server burns down and a user wants to view his now-nonexistent data, that data exists on another server exactly as he left it in his previous session.
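The replicate-and-fail-over pattern described above can be sketched as follows. The `Region` class and the region names are illustrative only — this models the idea, not any actual AWS service: writes are duplicated to every region, and reads fall back to the next region when one is down.

```python
class Region:
    """Hypothetical regional data store that can go offline."""
    def __init__(self, name: str):
        self.name = name
        self.store = {}
        self.online = True

    def get(self, key: str) -> str:
        if not self.online:
            raise ConnectionError(f"{self.name} is down")
        return self.store[key]

def replicate(regions: list, key: str, value: str):
    """Duplicate the write to every region."""
    for r in regions:
        r.store[key] = value

def read_with_failover(regions: list, key: str) -> str:
    """Try each region in turn; only fail if every region is unavailable."""
    for r in regions:
        try:
            return r.get(key)
        except ConnectionError:
            continue
    raise ConnectionError("all regions unavailable")

regions = [Region("us-east-1"), Region("eu-west-1"), Region("ap-south-1")]
replicate(regions, "post.html", "<h1>hello</h1>")

regions[0].online = False  # catastrophic event in one region
print(read_with_failover(regions, "post.html"))  # still served from eu-west-1
```

Because the data was dispersed across regions rather than duplicated on one, losing a whole region costs nothing but a failover hop — the jokey jester's single-server barrage does not take the content down.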
Now that wasn’t so hard, was it? There are many tools out there that handle these functions and integrate nicely with most applications. You should not lose sleep over C-I-A, but it is a priority and is fundamental for any individual or organization. Remember to just be careful what you put on the web.