
A Study of Clickjacking 4: Summary and Q&A

Adding the finishing touches to his USENIX Security presentation, Lin-Shung Huang provides conclusions drawn from the study and answers questions from the audience.

Summing it up

So, to sum up, we demonstrated new clickjacking variants that can evade current defenses. Our user studies show that our attacks are highly effective, and that our InContext defense can be very effective against clickjacking. We are currently working with NoScript and PayPal to propose a new UI safety browser API to the W3C, building on ideas from this work. So, thank you for listening, and I welcome any questions.
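As a rough illustration of how a site might opt into such UI-safety protection, here is a minimal sketch, assuming Node.js with Express; the input-protection directive name follows the early W3C UI Safety draft, and neither its exact syntax nor browser support should be assumed:

```typescript
// Minimal sketch (Node.js + Express assumed): a site opting its pages
// into clickjacking protections via response headers. The
// "input-protection" directive follows the early W3C UI Safety draft;
// treat its name and browser support as illustrative, not guaranteed.
import express from "express";

const app = express();

app.use((_req, res, next) => {
  // Long-standing defense: refuse to be framed by other origins.
  res.setHeader("X-Frame-Options", "SAMEORIGIN");
  // UI-safety style opt-in: ask the browser to verify that sensitive
  // UI is fully visible and unobstructed before delivering clicks.
  res.setHeader("Content-Security-Policy", "input-protection");
  next();
});

app.get("/", (_req, res) => {
  res.send('<button id="pay">Pay</button>');
});

app.listen(3000);
```

The design point is the division of labor: the server, which knows its own UI, declares that protection is wanted, while the browser does the actual visibility and timing checks.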

Questions:

Question: Hi, I like the work, and I’m just curious about the annotation that you rely on from the server: how likely is it that an application developer will be able to decide a priori which elements are sensitive and which aren’t? It seems to me that in Facebook’s case, for example, any link inside my site is valuable and would therefore need this type of protection. As for ads, I’m a little less concerned about that third-party content, since there is no way for me to completely control it, right? But ad fraud is another big class of clickjacking. So I’m just curious whether we will be able to achieve this sort of annotation throughout the web in anticipation of any kind of future clickjacking attack?

Answer: So, is your question: “How do the servers determine which UIs are sensitive?”

Question: Right, it seems to me that it relies on knowledge of the threat model, and as the threat evolves it’s possible that servers will not be able to annotate correctly. So clickjacking will continue to take place because this defense mechanism isn’t in place, right?

Answer: I can give some obvious examples of what should be sensitive: the Google auth dialog, or the Facebook ‘Like’ button, because it reveals the user’s identity; the webcam access dialog is certainly sensitive too. With these types of UIs, once users click on them, they reveal a lot of information to the site. I probably don’t have a very good answer to how servers should decide which elements are sensitive and which are not.

Question: Thank you for the great talk! That was wonderful! I have a similar question about marking which elements are sensitive: did you consider alternatives, like trying to automatically infer which elements are sensitive, maybe by treating any cross-domain element as sensitive, or other options like that?

Answer: Not in this work, but we could start by examining click events, especially clicks that are initiated by users and not by scripts, since those elements are probably more likely to be vulnerable.
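As a concrete starting point for that heuristic, the standard DOM isTrusted property already distinguishes user-generated events from script-generated ones. A minimal sketch, with a hypothetical element id:

```typescript
// Distinguish user-initiated clicks from script-initiated ones via the
// standard DOM `isTrusted` property. Elements that receive real user
// clicks (the kind an attack would try to steal) are candidates for
// being treated as sensitive. The "#like" selector is hypothetical.
const likeButton = document.querySelector<HTMLButtonElement>("#like");

likeButton?.addEventListener("click", (event) => {
  if (event.isTrusted) {
    console.log("user-initiated click");   // generated by a real user gesture
  } else {
    console.log("script-initiated click"); // e.g. likeButton.click()
  }
});
```

Logging which elements receive trusted clicks over time would give a server a first approximation of where its sensitive UI actually lives.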

Question: I was a little curious about the defense technique where you said the pointer had to be moved outside of the object and then back in, the pointer re-entry. It seems like that might be confusing to a user, because moving the pointer out of an interface element and back in before it becomes active is not a requirement they would be familiar with. Did you look at ways to make that requirement clear to users, and at whether they are confused by it?

Answer: I’m not an expert on that. Some ideas would be graying the element out, or showing a countdown timer, to hint to the user; maybe there are better solutions than that.
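To make that suggestion concrete, here is a minimal sketch of such a hint, with hypothetical element names: the sensitive button is grayed out and shows a countdown whenever the pointer enters it, and it only accepts clicks once the delay has elapsed. This illustrates the UI hint, not the InContext implementation itself:

```typescript
// Gray a sensitive button out and count down before it becomes
// clickable, re-arming the delay whenever the pointer re-enters it.
function armAfterDelay(button: HTMLButtonElement, seconds: number): void {
  const label = button.textContent ?? "";
  let remaining = seconds;
  button.disabled = true;
  button.style.opacity = "0.5"; // grayed out while inactive
  button.textContent = `${label} (${remaining})`;
  const timer = window.setInterval(() => {
    remaining -= 1;
    if (remaining > 0) {
      button.textContent = `${label} (${remaining})`;
    } else {
      window.clearInterval(timer);
      button.textContent = label;
      button.style.opacity = "1";
      button.disabled = false; // now active and safe to click
    }
  }, 1000);
}

const payButton = document.querySelector<HTMLButtonElement>("#pay");
if (payButton) {
  payButton.addEventListener("mouseenter", () => armAfterDelay(payButton, 2));
}
```

Disabling the button while it is grayed out also means that an attacker-timed click landing during the delay is simply dropped, so the visual hint and the protection stay in sync.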

Question: Did you measure explicitly whether the users realized, right after their click, that they might have done something they didn’t want to do? Was there something after the actual activity where you asked the users whether they really did what they intended, or something like that?

Answer: So, in our paper we actually mention that we did an after-task survey, and we found that the answers were not consistent with what the users did. Even when we asked a user: “Did you click on the Like button?”, they had clicked on it but answered that they didn’t. Or, for something like: “Did you intend to click on it?”, they had clicked but said they didn’t intend to. We found that very confusing, so we based our results on what the users actually did: we captured how their mouse was moving, and we checked that data.
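That kind of instrumentation can be as simple as recording the cursor trajectory and clicks for offline analysis. A minimal sketch, with an illustrative endpoint and type names, not the study’s actual code:

```typescript
// Record the cursor trajectory and clicks so analysis can rely on what
// participants actually did rather than on self-reported answers.
interface PointerSample {
  x: number;
  y: number;
  t: number; // milliseconds since page load
  kind: "move" | "click";
}

const trace: PointerSample[] = [];

function record(kind: PointerSample["kind"], e: MouseEvent): void {
  trace.push({ x: e.clientX, y: e.clientY, t: performance.now(), kind });
}

document.addEventListener("mousemove", (e) => record("move", e));
document.addEventListener("click", (e) => {
  record("click", e);
  // In a study setting, upload the trace for offline analysis
  // ("/trace" is an illustrative endpoint):
  navigator.sendBeacon("/trace", JSON.stringify(trace));
});
```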
 

Read previous: A Study of Clickjacking 3: Ensuring Visual and Temporal Integrity
