
Firefox 79 Clears Redirect Tracking Cookies Every 24 Hours (venturebeat.com)

An anonymous reader writes: Mozilla today started rolling out Enhanced Tracking Protection (ETP) 2.0 in Firefox. While the company technically launched Firefox 79 for Windows, Mac, and Linux last week, it only unveiled its marquee feature today. Firefox 79 by default blocks redirect tracking, also known as bounce tracking, and adds a handful of new developer features. [...] Since enabling Enhanced Tracking Protection by default, Mozilla says it has blocked 3.4 trillion tracking cookies. But the company notes the ad industry has since created workarounds and new ways to collect user data as you browse the web.

Comments Filter:
  • Comment removed based on user account deletion
    • Re: (Score:3, Informative)

      by tdailey ( 728882 )

      How does one block the tracking of tracking cookie blocking?

      FTA:

      "
      Enhanced Tracking Protection 2.0 attempts to address this by checking to see if cookies and site data from those trackers need to be deleted. The feature stops known trackers from accessing your information by clearing their cookies and site data every 24 hours. Because you look like a new user the next time you visit the tracker (after 24 hours), they can’t build a long-term profile of your activity.

      To be clear, Firefox tries not to clear

    • by backslashdot ( 95548 ) on Tuesday August 04, 2020 @03:12PM (#60365969)

      First, they block a block of blocks and then track a track of tracks, following that, they block a track with a track of blocks and track each block to the block of tracks.

  • Make it a configurable option to the point that I can opt to have them cleared every 24 minutes, and we can talk.

    • 24 nanoseconds sounds like something I can accept.

      • by kwalker ( 1383 )

        If you want more control over this, install a cookie-manager add-on like Cookie AutoDelete. Then you can decide which domains are allowed to set cookies and when they should be cleared.
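        Under the hood, an add-on like that essentially filters the cookie jar against an allow-list. An illustrative sketch of that filtering step (the cookie objects are simplified stand-ins; a real extension uses the WebExtensions browser.cookies API):

```javascript
// Illustrative sketch: keep cookies whose domain is on your allow-list,
// flag everything else for deletion. Simplified cookie objects, not the
// actual WebExtensions cookie shape.
function cookiesToClear(cookies, allowedDomains) {
  const allowed = new Set(allowedDomains);
  // Cookie domains often carry a leading dot; strip it before comparing.
  return cookies.filter((c) => !allowed.has(c.domain.replace(/^\./, '')));
}

const jar = [
  { domain: '.tracker.example', name: 'uid' },
  { domain: 'mybank.example', name: 'session' },
];
console.log(cookiesToClear(jar, ['mybank.example']).map((c) => c.name));
// → [ 'uid' ]
```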

  • Quite a while ago I did mostly server-side web development, but I haven't done that in a long time...

    Have all of the browser lockdowns of various features impacted web development very negatively?

    Especially for enterprise dev, I always wonder, when I read stories like this about more cookies being locked out, whether it's messing with enterprise development that might be able to make good use of a feature the outside world otherwise treats as harmful.

    • by DrYak ( 748999 ) on Tuesday August 04, 2020 @02:18PM (#60365719) Homepage

      Have all of the browser lockdowns of various features impacted web development very negatively?

      Most of these lockdowns only concern abusive patterns that an average web dev doesn't need anyway (i.e., they won't affect a new photo portfolio website that you've written).

      But there are a few gotchas here and there:

      In our specific case, at work, my team and I develop a bioinformatics pipeline for the analysis of virus NGS data [github.io].
      To make it a bit easier for the user to interpret the results, our team has recently added visual reports [youtu.be]; to make them more interactive they use HTML5/JavaScript, so they can easily be opened and played with in any standard browser. But these files aren't hosted on some external website; they are opened by the user directly from the output directory (i.e., they are accessed over file:///, not over https:).

      Modern browsers prevent JavaScript running from file:/// from accessing any external file, even one in the same directory (probably to keep a rogue e-mail attachment from stealing random files off your drive). That forces us to package all the tables within the HTML5 file itself instead of using an external JSON or CSV file.
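      That inlining workaround can be sketched like this (a hypothetical build-step helper, not the project's actual code): serialize the table into a &lt;script&gt; tag so the report never has to fetch anything over file:///.

```javascript
// Hypothetical build-step helper (not the project's actual code): inline a
// data table into the HTML report so nothing is fetched over file:///.
function inlineJson(htmlTemplate, varName, data) {
  // "</script>" inside the serialized data would end the tag early,
  // so escape it before embedding.
  const payload = JSON.stringify(data).replace(/<\/script>/g, '<\\/script>');
  const tag = `<script>const ${varName} = ${payload};</script>`;
  return htmlTemplate.replace('</head>', `${tag}\n</head>`);
}

const page = '<html><head></head><body></body></html>';
const out = inlineJson(page, 'mutationTable', [{ pos: 23403, ref: 'A', alt: 'G' }]);
// The report's JavaScript can now read `mutationTable` directly,
// with no external JSON or CSV file involved.
```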

      Modern browsers' protection against XSS also has a "same origin" limitation (so that a rogue website cannot tap into the API of a different website that hasn't explicitly enabled external calls). HTML files running from file:/// are *always* forbidden from sending POST requests. This makes it nearly impossible to, e.g., have our report interact with external databases (e.g., add a button to display a mutation mapped onto the protein structure in 3D using Swiss Model [expasy.org]); instead we first need to negotiate some GET-based API with the database owners (one of my team members is currently doing exactly that with the colleagues working at Swiss Model).
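      A minimal sketch of that GET-based fallback, with a made-up endpoint and parameter names (NOT Swiss Model's actual API):

```javascript
// Hedged sketch: since a page opened over file:/// can't POST to an external
// API, encode the request into a GET URL instead. Endpoint and parameter
// names are hypothetical.
function buildGetUrl(base, params) {
  const qs = new URLSearchParams(params).toString();
  return `${base}?${qs}`;
}

const url = buildGetUrl('https://example.org/structure-view', {
  protein: 'spike',
  mutation: 'D614G',
});
// The report would then open this from a button's click handler,
// e.g. window.open(url, '_blank').
console.log(url); // → https://example.org/structure-view?protein=spike&mutation=D614G
```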

      But these troubles are pretty minor (we're really in the corner-case territory).

    • Yeah, sometimes. I work on a CRM. Some of our vendor integrations have been a hassle because they're rooted in hackiness from a generation ago. For example, after CORS rules started being recognized and enforced, our platform's Salesforce integration had to be updated because they embedded our platform in an iframe. We had to implement a new login mechanism specifically for that iframe because cookies weren't playing well with the integration anymore. Testing was pretty muc
      • Thanks for the info, that does sound like an especially sucky thing to fix.

        That kind of iFrame trickery was one of the things I was wondering about failing with newer browser restrictions since I had worked on similar integrations in the past, so it's interesting to hear restrictions have had an effect.

        I was wondering if maybe enterprise browsers would be configured to drop some of the restrictions, but I figured probably not, as that would weaken external security for outside browsing.

        • Yeah, basically. Plus, those kinds of things can only be avoided if the user explicitly disables them. Most of the time, the user isn't even aware those protections are in place to begin with. Luckily, announcements and documentation have been getting better when these things do change, so there is usually enough information available to figure out how to deal with those updates without too much trouble.
  • by Opportunist ( 166417 ) on Tuesday August 04, 2020 @02:14PM (#60365711)

    Create a way to shuffle tracking cookies about from browser to browser. Poison their data pool.

    • by onix ( 990980 )

      I like obfuscation. Assume they are going to track and monitor, and just feed them bogus data. There are just so many ways to do that. Along with pooling cookies and redistributing them, let the browser do a random walk on links, or create a bogus browser personality, that follows all the links associated with a specific geography, etc.

  • Am I the only one who clears all cookies twice a day or has Firefox set to delete all cookies when it closes? Am I the only one who always logs out of any account I logged into (electricity bill, ISP, etc)? Why would you stay logged in forever to a site? That's just asking for trouble.

  • by drew_kime ( 303965 ) on Tuesday August 04, 2020 @02:43PM (#60365829) Journal

    According to their description, when you hover over a link you see one URL, but when you click it the code intercepts the click and routes you through the tracking domain first before forwarding you on to the URL you were expecting.

    How about modifying the browser to ensure that the URL I see when I hover is the one that I go to when I click? Seems a bit better than deleting the cookies after I've already gone to the tracker.
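    For illustration, the bounce-tracking pattern described above can be sketched like this (the domain names are made up):

```javascript
// Illustrative sketch of bounce tracking: hover shows the real href, but a
// handler swaps in a tracker redirect at the last moment before navigation.
function toTrackerRedirect(realUrl) {
  return 'https://tracker.example/bounce?dest=' + encodeURIComponent(realUrl);
}

// In the page, something like:
//   document.addEventListener('mousedown', (e) => {
//     const a = e.target.closest('a');
//     if (a) a.href = toTrackerRedirect(a.href); // rewritten only on click
//   });
console.log(toTrackerRedirect('https://news.example/story'));
// → https://tracker.example/bounce?dest=https%3A%2F%2Fnews.example%2Fstory
```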

    • by flux ( 5274 )

      Well, if the result is going to be that websites can no longer track where you exit, then the links won't even show the real destination but a service inside that website that then does the redirection, so they get the data.

      In the end I'm not sure that's going to be a win... At least it's obvious to people that the link is tracked, but that's what you knew already. You could mark potentially redirecting links in the hover

      • Well, if the result is going to be that websites can no longer track where you exit, then the links won't even show the real destination but a service inside that website that then does the redirection, so they get the data.

        I think the site can still track the exit link without killing the redirect. Isn't it typically the ad systems that hijack and redirect the links?

    • Alas, more and more web sites are using Javascript to generate links, so the very concept of hyperlinks is dying. The link isn't generated until you actually click on it. It's also the reason why increasingly you can't right-click and "Open Link in New Window".

      It drives me nuts how broken the web is these days. Fuck application-centric design.
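      An illustrative sketch of that anti-pattern: the "link" is not an &lt;a&gt; with an href, just an element whose destination is computed inside a click handler, so the browser has nothing to show on hover or offer in the context menu.

```javascript
// Illustrative sketch: a JavaScript-generated "link" with no real href.
function makeFakeLink(label, resolveUrl) {
  // In a real page: span.addEventListener('click', () => {
  //   location.href = resolveUrl();
  // });
  return { tag: 'span', label, href: undefined, onClick: resolveUrl };
}

const fake = makeFakeLink('Read more', () => '/article?id=' + Date.now());
console.log(fake.href); // → undefined (nothing for "Open Link in New Window")
```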

  • Otherwise you can't read a paper online nowadays.
