Keyboards and Keywords

Jul 2, 2013 8:00am

The Wikipedia entry for ‘error message’ includes a number of infamous (and confusing) error messages, though it doesn’t include my all-time favorite:

Keyboard not found! Press any key to continue

And no, that’s not an urban legend. While I’m not sure that was the exact wording, I did see more or less that same error message two or three times back in the days when user support was part of my job.

The reason I was scouring the web for links related to ‘error messages’ and ‘security alerts’ is this: I happened across an article on the American Psychological Association website reporting that Gary Brase (a psychologist) and Eugene Vasserman (a computer security researcher), both of Kansas State University, have been awarded a $150,000 grant to research more effective online security alerts. I don’t know how many security companies have explored this approach – though I don’t believe for a moment that no security company has ever involved psychologists, focus groups, and ergonomists (among others with an interest and expertise in human-computer interaction) in the development of a product and its user interface – but I’m sure we’ve all seen enough confusing software-generated alerts to agree that some software could do with a little more attention to the HCI dimension. There is a special place in my heart for the sort of alert we often see along the lines of ‘EICAR test virus not-a-virus detected’.

In fact, while I may be biased – my own academic background was originally in social sciences, computer science being a later add-on – I don’t think that computer security that’s focused entirely on bits and bytes is ever going to solve the world’s problems with cybercrime, cyber-espionage, and all the other cyber-issues du jour. Certainly the kind of security alert that leaves the user wondering “What the heck does that mean? What does the darn thing want me to do?” is failing some kind of usability test.

The APA article includes a couple of examples cited by Brase:

"Do you want to trust this signed applet?" or "This application's digital signature has an error; do you want to run the application?"

Frankly, I’ve seen far more confusing examples, guaranteed to have the end user running to the nearest wall to bang his head against it: any message that includes an error code or a hex string, something like ‘unknown error scanning file [filename]’, or even a blank message box. But these examples do finger an essential problem with security alerts, one that I’m not sure $150k is going to be enough to fix.

The problem with Brase’s examples isn’t the wording: it’s conceptual. If the algorithm behind the program can’t make a reliable determination of the risk, why should we expect the everyday user to be able to? Actually, sometimes he can: maybe he knows that a site is (normally) OK, even if he can’t be sure it hasn’t been compromised in some way. Software has the disadvantage that it can only deduce intent from the programmatic characteristics of a program, or from automated textual analysis. And while filtering has progressed immeasurably since the days when phrases like ‘magna cum laude’ or the name Scunthorpe triggered porn detection algorithms all over the globe (a toy illustration of that kind of naive matching follows below), there are still many contexts where an informed human being can make a better decision than an email or web filter.

But ‘informed’ people aren’t the main target for research like this: rather, Brase states that "Good security alerts should be tailored for all types of users, no matter what their intent," which suggests a wide range of skill and knowledge levels, as well as a wide range of target sites. There’s an important point there: I agree that we need to be in touch with the intent of the user as well as that of the malefactor. In fact, Jeff Debrosse and I wrote a paper a few years ago – Malice Through the Looking Glass: Behaviour Analysis for the Next Decade – in which we suggested that security companies could increase their effectiveness by building analysis of the user’s behaviour into the software alongside analysis of programmatic behaviour, though I’m not holding my breath waiting for that approach to catch on. It is one way, potentially, of addressing another of Brase’s points: that ‘user education has not kept pace with the increasing complexity of Internet transactions.’ That, at least, is perfectly true. I’m all for making computers people-literate (the very apposite title of a book by Elaine Weiss).
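To make the Scunthorpe point concrete, here is a minimal, hypothetical sketch of the kind of naive substring matching that used to trip up content filters; the blocklist and sample messages are my own invention for illustration, not taken from any real product.

```python
# A minimal sketch of naive substring filtering (hypothetical blocklist
# and messages, purely for illustration).

BLOCKLIST = ["cum", "sex"]  # crude tokens a naive filter might search for

def naive_filter(text):
    """Return the blocklist entries found anywhere in the text."""
    lowered = text.lower()
    return [word for word in BLOCKLIST if word in lowered]

messages = [
    "She graduated magna cum laude.",           # false positive on 'cum'
    "Please forward the Essex sales figures.",  # false positive on 'sex'
    "The quarterly report is attached.",        # nothing to match
]

for msg in messages:
    hits = naive_filter(msg)
    if hits:
        # Innocent text is flagged because substring matching has no sense
        # of word boundaries or context; the classic Scunthorpe case arises
        # the same way, just with a different blocked string.
        print("BLOCKED ({}): {}".format(", ".join(hits), msg))
    else:
        print("OK: {}".format(msg))
```

An informed human reader dismisses all of these in an instant; the filter, lacking any notion of context or intent, cannot.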

The logical flaw here, though, is this: improving the presentation of security alerts won’t make security software (or other software with some security functionality, such as a browser using a technology like Google’s Safe Browsing) any better at discriminating between human motivations than it already is. That’s not as negative a comment as it sounds: programmatic filters don’t in themselves ‘detect’ malicious intent, but they do reflect the programmer’s understanding of some behaviour – programmatic or semantic – that is characteristic of malicious intent. But malicious behaviour is not a constant: it doesn’t stand still. And the average security program is still a long way from achieving the same discrimination in analysing textual content that a moderately psychologically aware human being is capable of.
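As an illustration of that last point, here is a hypothetical sketch of the sort of hand-written heuristic a programmer might encode; the rules, weights, and threshold are invented for this example, but they show how a filter can only ever express its author’s current model of what malicious behaviour looks like, not malice itself.

```python
# Hypothetical rule-based heuristic: each rule is a behaviour the programmer
# has decided is characteristic of malicious intent, with a weight attached.
# Rules, weights, and threshold are all invented for this sketch.

SUSPICION_RULES = {
    "writes to an autorun/startup location": 2,
    "disables security software": 3,
    "contacts a known-bad network address": 3,
    "packs or obfuscates its own code": 1,
}

def suspicion_score(observed_behaviours):
    """Sum the weights of the rules matched by the observed behaviour."""
    return sum(SUSPICION_RULES.get(b, 0) for b in observed_behaviours)

sample = {
    "writes to an autorun/startup location",
    "packs or obfuscates its own code",
}

score = suspicion_score(sample)

# The verdict reflects a human judgement made in advance: behaviour the
# rule-writer never anticipated simply contributes nothing to the score.
print("flag for analysis" if score >= 3 else "below threshold")
```

The point of the sketch is not the particular rules but their provenance: when the attacker changes behaviour, the rules have to be rewritten by a human who has understood the change.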