Trust in the world of identity is a huge priority. It runs from onboarding and registering external users, through verification (think assurance levels built on identity validation and verification techniques), to creating trust labels for employees in order to monitor malicious activity – whether driven by external threat actors, malicious insiders, or simply unintentional bad user behavior.
The other side of the coin, of course, is trusting a service provider based on its identity – can it be trusted with my personally identifiable information?
But I want to focus briefly on identity trust – and there are a few data points we can use for that. First, is the user (and I’ll stick to the identity of people here, instead of non-personal entities, things and services for now) known to the service? By known, we can assume that they have already been registered with the system – either automatically or via self-registration. But at some point, some data was created, stored persistently, and could be used for a reauthentication and trust assessment event.
The other data point I suggest we could use is visibility. Now, that might seem like a pretty unusual attribute because it can be hard to measure, but being able to “see” the identity and the transaction or event they’re looking to perform is an important step in being able to respond – with the right level of friction at the right time. Security preparation often fails, not because controls are bypassed, but simply because there is no visibility into an event in progress – or into a subject-to-object relationship being created and maintained.
So what can we do with two data points? Well, a basic two-by-two matrix is a good place to start because we can start classifying some polarized behaviors as follows:
| Identity -v- Visibility | Seen | Invisible |
|---|---|---|
| **Known** | 1 – Monitor behavior, i.e. trust, but verify | 2 – Monitor access points, i.e. exit and entry flows |
| **Unknown** | 3 – Adaptive response – apply appropriate friction | 4 – Black Swan! – assess the risks |
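The matrix above can be sketched as a simple classifier. This is a minimal illustration, not an implementation from the original article – the enum names and the two boolean inputs are my own labels for the four quadrants:

```python
from enum import Enum

class Response(Enum):
    """The four quadrant responses from the Identity -v- Visibility matrix."""
    MONITOR_BEHAVIOUR = 1      # known + seen: trust, but verify
    MONITOR_ACCESS_POINTS = 2  # known + invisible: watch entry/exit flows
    ADAPTIVE_FRICTION = 3      # unknown + seen: apply appropriate friction
    ASSESS_RISK = 4            # unknown + invisible: "black swan" exposure

def classify(known: bool, seen: bool) -> Response:
    """Map the two trust data points onto the 2x2 matrix."""
    if known and seen:
        return Response.MONITOR_BEHAVIOUR
    if known and not seen:
        return Response.MONITOR_ACCESS_POINTS
    if not known and seen:
        return Response.ADAPTIVE_FRICTION
    return Response.ASSESS_RISK
```

The point of the sketch is that the two inputs are cheap to evaluate, yet they partition every event into one of four well-defined response strategies.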
1 – Known and seen
Let’s start with the simplest combination: an identity is known and the activity is “seen”. This is a typical interaction for authentication or authorization services. The fact that the event is seen triggers a security check – in which case we can borrow zero-trust terminology and assume that we can “trust, but verify” once certain steps have been taken to respond to the event – perhaps a connection event or a policy evaluation request.
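A “trust, but verify” policy evaluation might look something like the following sketch. The session fields, the freshness window, and the MFA check are all illustrative assumptions, not a real product API:

```python
import time

# Hypothetical session record; field names are illustrative only.
MAX_AUTH_AGE_SECONDS = 300  # re-verify if the last check is older than this

def trust_but_verify(session: dict, now: float = None) -> bool:
    """Known + seen: honour the session, but re-verify stale or weak auth.

    Returns True only when the last authentication is recent enough and
    was backed by MFA; otherwise the caller should trigger a step-up check.
    """
    now = time.time() if now is None else now
    fresh = (now - session["authenticated_at"]) <= MAX_AUTH_AGE_SECONDS
    return fresh and session.get("mfa", False)
```

The design choice here is that knowing the identity never short-circuits the check: every seen event still passes through the policy evaluation, which is the essence of the zero-trust framing.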
2 – Known and Invisible
Let’s try another scenario: one where an identity is known but the event they are trying to perform is not seen. By this we assume that the object they are trying to access, or the event they are trying to participate in, is not under the control of a reference monitor, or may not have been tagged or had permissions assigned. So what can we do here? The assumption would be that some sort of “meta” monitoring might suffice. Although a particular user is known, some of the activities they perform cannot be seen. So, by looking at the entry points and, more importantly, the exit points of dataflows or APIs, it is possible to apply certain controls. Think of this approach as the detectors often seen at store doors – the store cannot see every customer-item relationship, but it seeks to capture a theft event at a meta level at the exit – i.e. the output filter.
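The store-detector analogy can be sketched as an egress filter: instead of observing every subject-to-object interaction, we only inspect outbound payloads for sensitive markers. The marker set and field names below are illustrative assumptions:

```python
# Sketch of a meta-level "exit detector": we cannot see every
# subject-to-object interaction, so we inspect outbound flows instead.
SENSITIVE_MARKERS = {"ssn", "card_number", "api_key"}  # illustrative tags

def exit_filter(outbound_payload: dict) -> list:
    """Return the sensitive fields present in an outbound payload.

    A non-empty result would trigger an alert or block at the egress
    point (e.g. an API gateway), even though the upstream access that
    produced the payload was never directly observed.
    """
    return sorted(k for k in outbound_payload if k in SENSITIVE_MARKERS)
```

Usage: `exit_filter({"name": "alice", "ssn": "123-45-6789"})` flags the `ssn` field leaving the perimeter, which is exactly the "capture at the exit" behavior the detector analogy describes.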
3 – Unknown and Seen
This one is fairly self-explanatory. In this case, we have a scenario where an event is seen (a login, an authorization request, etc.) but the identity is unknown. So here we have a basic check to authenticate the user (the classic HTTP 401). At this point, we don’t know them, so to complete the authentication event, for example, they may need to register for the service in order to create a profile. This registration process will of course contribute to the level of static assurance associated with the identity – what identity data was provided, where it was provided from, and how the data was validated and verified, for example. The result of this adaptive response process will be a level of friction appropriate to the transaction being executed.
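The adaptive response can be sketched as picking friction proportional to the gap between transaction risk and the assurance established at registration. The numeric scales and outcome labels below are my own illustrative assumptions, not a standard:

```python
def friction_level(assurance: int, transaction_risk: int) -> str:
    """Pick friction proportional to the gap between risk and assurance.

    assurance: 0 (unverified) .. 3 (validated and verified identity documents)
    transaction_risk: 0 (read-only) .. 3 (high-value transfer)
    """
    gap = transaction_risk - assurance
    if gap <= 0:
        return "allow"               # assurance already covers the risk
    if gap == 1:
        return "step-up"             # e.g. an OTP or MFA challenge
    return "deny-and-register"       # route through full registration/verification
```

So a well-verified identity performing a low-risk action flows straight through, while an unknown user attempting a high-value transaction is pushed into the registration and verification process described above.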
4 – Unknown and Invisible
This combination is really the important part of the whole trust discussion: a scenario where an event is not seen and the identity is also not known to the service provider. A classic “I don’t know what nobody’s doing” double negative. I have jokingly referred to this as a “black swan” event – an event that happens only rarely but has a huge impact. An example scenario might be where an authentication or authorization event has not been verified or has been bypassed entirely, and activity against the object is not monitored. The consequences could be devastating, primarily because the impact may not be known until long after the event has taken place.
So what are the options? First, it must be recognized that this may well be a real-life scenario. Control coverage across all objects may not be complete, and authentication and authorization gates may not always be applicable. But security resources are limited, so an assessment exercise must take place that can help reduce the impact of such an event, or attempt to reduce the likelihood of it happening in the first place. This could involve the standard acceptance, transfer, or avoidance tactics.
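The standard treatment decision can be sketched as a function of estimated likelihood and impact. The thresholds below are purely illustrative assumptions – real programmes set them according to their own risk appetite:

```python
def treat_risk(likelihood: float, impact: float,
               accept_threshold: float = 1.0,
               avoid_threshold: float = 6.0) -> str:
    """Map estimated exposure (likelihood x impact) to a standard treatment.

    likelihood: estimated events per year (black swans sit near zero)
    impact: relative loss magnitude on an arbitrary scale
    Thresholds are illustrative only.
    """
    exposure = likelihood * impact
    if exposure <= accept_threshold:
        return "accept"
    if exposure >= avoid_threshold:
        return "avoid"
    return "transfer-or-mitigate"
```

Note the black-swan subtlety: a tiny likelihood multiplied by a huge impact can still clear the threshold, which is why this quadrant deserves an explicit assessment rather than a default "accept".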
In summary, trust is a transient but extremely important aspect of the digital identity lifecycle. It should be assigned, updated, and managed with context, and be dynamic at the same time. It is no longer acceptable to simply assign trust to an identity or transaction as a static, immutable attribute. Trust must be consumed by a range of different actors, systems, and services to deliver personalized experiences, cross-device interactions, and large data collaboration ecosystems. Being able to classify identities as known and unknown is a useful and standard approach to responding with appropriate levels of friction and security controls. The concept of visibility is newer, but can provide an interesting way to augment risk assessment methodologies for objects, transactions, and events that are not currently considered material to business functions.
About the Author
Simon Moffatt is Founder and Analyst at The Cyber Hut. He is a published author with over 20 years of experience in the cybersecurity and identity and access management industries. His most recent book, “Consumer Identity & Access Management: Design Fundamentals,” is available at Amazon. He holds a graduate degree in Information Security, is a Fellow of the Chartered Institute of Information Security and is CISSP, CCSP, CEH and CISA. His 2022 research journal focuses on “next generation authorization technology” and “identity for the hybrid cloud”.
*** This is a syndicated blog from the Security Bloggers Network of The Cyber Hut written by Simon M. Read the original post at: https://www.thecyberhut.com/identity-trust-the-seen-and-known-matrix/?utm_source=rss&utm_medium=rss&utm_campaign=identity-trust-the-seen-and-known-matrix