Protection of Information

* Summary: the paper surveys techniques and insights on the protection of computer-resident information, i.e. authorization, authentication, and access control.
* There's a wonderful glossary.
* Levels of protection: no protection, complete isolation, controlled sharing, user-programmed sharing controls, tagging of information. Dynamics of use considered important.
* Design principles for security: (a fabulous section of the paper!)
  o Economy of mechanism: keep it simple and small.
  o Fail-safe defaults: the default should be lack of access.
  o Complete mediation: every access must be checked.
  o Open design: "security through obscurity" doesn't work.
  o Separation of privilege: where feasible, require two independent keys rather than one.
  o Least privilege: principals should get only the privileges they legitimately need, and no others.
  o Least common mechanism: minimize the amount of mechanism shared among, and depended on by, multiple users.
  o Psychological acceptability: users will find a way around your protection mechanism if it is too inconvenient to use. Also, the user interface to the protection mechanism should roughly match the user's mental model; otherwise mistakes will creep in.
  o Work factor: security is economics; you just have to make the cost of breaking security high enough to deter intruders.
  o Compromise detection (aka intrusion detection): analysis of audit logs to detect (rather than prevent) violations of the security policy.
* Common mechanisms: descriptors, supervisor mode, hardware access checking. You already know all this stuff.
* Authentication: passwords ("something you know"), eavesdropping, physical tokens ("something you have"), bio-identification ("something you are", e.g. fingerprints), server spoofing, cryptography.
* Access control lists and capabilities for more flexible protection. (In the paper's terminology, list-oriented vs. ticket-oriented.)
* Detailed examples of descriptor-based protection systems. A largely boring & outdated section of the paper; here are the interesting bits:
  o Capabilities. Can be implemented with hardware support by tagging (or else in software with crypto). Access checks are done at initial bind time; thereafter the capability can be used, copied, and shared freely. Thus capabilities can be very efficient, since the access check need only be done once, at bind time. The disadvantage is that revocation and propagation control get really hard.
  o Access control lists. Objects have an ACL associated with them that says who can get what access to the object. Revocation, etc. get easy, but performance suffers, since the heavyweight access check is performed on every access.
  o Hybrid systems: use access control lists at a high level; principals who pass the ACL check are given a capability.
  o Discretionary vs. mandatory access control (MAC). Discretionary controls are those the owner of an object imposes on his object; mandatory controls are those the system imposes on all objects. MAC is typically used in military systems: all data has a classification, and no principal may leak Top Secret data to a compartment with a lower classification. Lots of surprising theoretical difficulties with MAC; covert channels are hard to eliminate.
  o Protected subsystems: collections of objects that others can access only through designated entry points. Allows user-specified, programmable access controls.
* State of the art (in those days):
  o Most systems didn't provide much security, or only recently began worrying about protection. Not much distinction made between hardware architectures and software operating systems.
  o Research directions: assurance, fault-tolerance, constraints on use of information after release (e.g. MAC), cryptography, authentication.
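The password bullet above mentions eavesdropping and server compromise as threats. A minimal sketch of the standard defense for the stored-password half of the problem, using only Python's standard library (the salt size and iteration count here are illustrative, not the paper's):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store a salted, slow hash instead of the password itself, so a
    # stolen password file can't be attacked with one precomputed table.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    _, candidate = hash_password(password, salt)
    # Constant-time comparison, to avoid leaking a timing side channel.
    return hmac.compare_digest(candidate, digest)
```

Note this only protects the stored secrets; it does nothing against eavesdropping on the wire or server spoofing, which is where the cryptography bullet comes in.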
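The list-oriented vs. ticket-oriented distinction, and the hybrid scheme, can be sketched in a few lines. This is a toy illustration (all names hypothetical, not from the paper): the heavyweight ACL check happens once at `open`, which hands back a capability token; later accesses only check token possession. Keeping tickets in a central table is what makes `revoke` easy here; with freely copied hardware capabilities, revocation is exactly the hard part.

```python
import secrets

class Store:
    """Toy object store: ACL check at bind time, capability thereafter."""

    def __init__(self):
        self.objects = {}   # name -> contents
        self.acls = {}      # name -> {principal: set of rights}
        self.tickets = {}   # capability token -> (name, right)

    def create(self, owner, name, contents):
        self.objects[name] = contents
        self.acls[name] = {owner: {"read", "write"}}

    def open(self, principal, name, right):
        # List-oriented: the heavyweight ACL check, done once at bind time.
        if right not in self.acls.get(name, {}).get(principal, set()):
            raise PermissionError(f"{principal} lacks {right} on {name}")
        # Ticket-oriented: an unforgeable token naming object and right.
        token = secrets.token_hex(16)
        self.tickets[token] = (name, right)
        return token

    def read(self, token):
        # Fast check: possession of a valid token suffices.
        name, right = self.tickets[token]
        if right != "read":
            raise PermissionError("capability does not grant read")
        return self.objects[name]

    def revoke(self, token):
        # Easy only because all tickets live in one table.
        self.tickets.pop(token, None)
```

Usage: `cap = store.open("alice", "notes", "read")` pays the ACL cost once; every subsequent `store.read(cap)` is a cheap table lookup.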
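The MAC rule that no principal may leak Top Secret data downward can be stated as two checks over a classification ordering. A simplified sketch (a linear lattice only; the military model also has compartments, omitted here):

```python
# Hypothetical linear classification levels, lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def may_read(subject_level, object_level):
    # "No read up": a subject may only read at or below its own level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # "No write down": writes may only go to the subject's level or above,
    # so Top Secret data cannot flow into a lower classification.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

Even with both checks enforced, the covert channels mentioned above remain: a high subject can still signal bits to a low observer through timing or resource contention, outside any mediated read/write.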