
Cyber Reflections: Creating Space for Users to Make Informed Cybersecurity Decisions

By Maj. Gen. Paul Capasso USAF (Ret.) •  September 26, 2018

Could you be responsible for the next data breach within your organization?  Chances are you could be, as statistics continue to highlight the user as the weakest link in cybersecurity. When all it takes is one careless individual, one line of corrupted software code, or one contaminated platform to wreak havoc within the cyber domain, your odds of a breach rise exponentially.

Today, the cybersecurity industry is mainly focused on the use of technology to thwart cyberattacks.  In fact, according to Gartner, “worldwide cybersecurity spending will climb to $96 billion in 2018.”  Despite this herculean investment, we are still falling behind in the game of cyber cat and mouse.

In 1998, former CIA director George Tenet warned, “We are staking our future on a resource we have not yet learned to protect.” This statement is truer today than it was 20 years ago.  If technology alone isn’t the answer, then what are we missing?

Human Behavior and Cybersecurity

In an ideal world, we could completely remove the user from the cybersecurity equation.  Imagine a world of hands-free protection across the entire cyber ecosystem, removing the risk of human error at all stages – from the software developer, supplier, system administrator, and end-user.  Unfortunately, that ideal world doesn’t exist, and there are no simple solutions to the complex and growing problem of data breaches caused by human error.

I have always been intrigued by the human mind and why people behave as they do, especially in the area of security and human-computer interaction. A user base with good cybersecurity behavior could help alleviate the day-to-day issues plaguing the hyper-connected world we live in.

How can we change human behavior so users better understand the risk of online threats, taking the game of chance out of making the right or wrong choice in cyberspace?  One potential solution is to focus on cognitive ergonomics, more commonly known as human factors, when we design security solutions.

Human factors allow us to look at how information in cyberspace is presented to and consumed by the user, and what we can do differently to improve the outcomes of the decision-making process.  Since information and decision-making are inextricably linked, information needs to be presented in a way that is understandable, concise, and prompts action.

Take the case of system warning pop-ups.  Most are boring and difficult to understand.  They tend to be too technical in nature, lack brevity, and are filled with legal gobbledygook.  Most users do not have the technical knowledge to understand what these warnings mean, so they click right through them.

Implementing a tiered security behavior model would slow down the decision process, giving individuals the space to make better-informed decisions.  Maximizing participant involvement through interactive and engaging security controls would increase user learning and retention by stepping through potential risks and guiding the user toward positive, rational decisions.  Examples include pairing user acknowledgements with specific actions, such as answering questions, listening to a short audio clip, or watching a video pertaining to warning displays, cautionary banners, and security notices.
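As a rough sketch of what such a tiered flow could look like (the function, banner text, and required responses below are illustrative assumptions, not any specific product's design), the idea is to replace a single dismissable pop-up with a short sequence: a plain-language risk summary, a comprehension check, and an explicit acknowledgement action.

```python
# Hypothetical sketch of a tiered security warning. The "responses" list
# stands in for interactive user input so the flow is easy to follow.

def tiered_warning(risk_summary, question, correct_answer, responses):
    """Step the user through a risk summary, a comprehension check,
    and an explicit acknowledgement; return 'allowed' or 'blocked'."""
    steps = iter(responses)

    # Tier 1: present a plain-language risk summary, not legal gobbledygook.
    print(f"SECURITY NOTICE: {risk_summary}")

    # Tier 2: comprehension check -- the user must answer a question
    # about the risk instead of clicking straight through it.
    if next(steps, "").strip().lower() != correct_answer.lower():
        return "blocked"

    # Tier 3: acknowledgement paired with a specific action.
    if next(steps, "").strip() != "PROCEED":
        return "blocked"

    return "allowed"

# A user who answers the check correctly and types PROCEED gets through;
# anyone else is slowed down rather than waved past the warning.
result = tiered_warning(
    "This link's destination does not match its display text.",
    "Does this link really go to your bank? (yes/no)",
    "no",
    ["no", "PROCEED"],
)
```

The point of the sketch is the pacing, not the specifics: each tier adds a small amount of friction that converts a reflexive click into a considered decision.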

“The act of participation deepens engagement, enhances learning, and accelerates behavior change.”

Combining interactive techniques (animation, graphics, audio, short videos, scrolling text) with easy-to-understand language and eye-catching contextual design improves the user’s ability to make sound decisions.

“Aesthetic designs tend to generate positive affect, which supports creative thinking and problem solving as users interact with those designs.”

Since familiarity breeds complacency, warning banners should be changed frequently in their design, verbiage, and the user actions they require.
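One simple way to implement that rotation (the banner texts and helper below are illustrative assumptions, not a prescribed design) is to keep a pool of variants and avoid serving whatever the user has seen most recently:

```python
import random

# Illustrative pool of banner variants; rotating design, wording, and the
# required user action works against habituation.
BANNER_VARIANTS = [
    {"text": "STOP: external sender. Verify before opening any attachment.",
     "action": "type VERIFY to continue"},
    {"text": "Caution: this file arrived from outside the organization.",
     "action": "answer a short question"},
    {"text": "Unexpected attachment. Were you expecting this file?",
     "action": "watch a 10-second clip"},
]

def pick_banner(seen_recently):
    """Choose a variant the user has not seen lately, so the warning stays novel."""
    fresh = [b for b in BANNER_VARIANTS if b["text"] not in seen_recently]
    return random.choice(fresh or BANNER_VARIANTS)
```

Because the text and the required action both change, the user cannot build a muscle-memory click path through the warning.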

As society becomes more connected and more dependent on 1s and 0s to handle everyday life, adding a focus on the human element of cybersecurity to today’s technology arsenal may help quell the problem at hand.  We must also remember that users need space and time to make informed cybersecurity decisions.  After all, when it comes to cybersecurity, slow and deliberate is better than fast and unwitting.

Maj. Gen. Paul Capasso USAF (Ret.)

Maj. Gen. Paul Capasso (Ret.) is the vice president of strategic programs at Telos Corporation.

The Empower and Protect Blog brings you cybersecurity and information technology insights from top industry experts at Telos.

2 Comments

  • Wertland says:

    I am not really following how “in an ideal world, we could completely remove the user from the cybersecurity equation. ” Why is it ideal to remove the user? Blaming the user is easy. Making security user-friendly (and not embracing complexity for its own sake) is hard.

    • Paul Capasso says:

      Thank you for reading and for your comment. I’m sorry if I gave the impression that I was blaming the user. That isn’t what I meant in the passage you quoted.
       
      That statement — “In an ideal world, we could completely remove the user from the cybersecurity equation” — might be likened to the development of autonomous vehicles.
       
      The motivation behind self-driving cars isn’t “blaming the driver,” it’s just a recognition that human error is a significant factor in traffic accidents, no matter how well-designed the dashboard and mechanical systems.
       
      In other words, the goal of driverless cars is to let people get where they want to go, safely and comfortably, without having to worry that anything they might do would cause an accident.
       
      To continue the driverless-car analogy, my “ideal world” statement — “removing the user from the cybersecurity equation” — isn’t blaming the user, it means designing systems and applications so securely that people can use them without worrying that something they might do would lead to an intrusion or breach.
       
      That’s what I meant by “hands-free protection across the entire cyber ecosystem, removing the risk of human error at all stages – from the software developer, supplier, system administrator, and end-user.” (“Hands-free” itself suggests a driverless car.)
       
      But because there’s no ideal world, users do need to make security decisions in the course of their work. Which means security developers do need to take the human factor into consideration and make their solutions more intuitive — or as you put it, “making security user-friendly (and not embracing complexity for its own sake).” In fact, that point was the essence of the rest of my post.
       
      People have better things to do when working with systems and applications than to worry that they’re going to cause a breach. In an ideal world, they wouldn’t have to. But since it isn’t an ideal world, we need to make security as simple and intuitive as we can for them. I think we both agree with that.
