• Beej Jorgensen@lemmy.sdf.org · +77/-6 · 1 year ago

    I’m on the “OK but keep an eye on it” train, here.

    Devs need feedback to know how people are using the product, and opt-out tracking is the best way to do it. In this case, it seems like my personal data is completely unidentifiable.

    I was coding in the IE6 era, so I’d really prefer to not end up in a browser engine monoculture again.

  • Hal-5700X@sh.itjust.works · +70 · 1 year ago

    To disable it in about:config

    browser.search.serpEventTelemetry.enabled = false

    browser.search.serpEventTelemetryCategorization.enabled = false
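
    Or, to make these stick across sessions, the same prefs can go in a user.js in your profile folder (a minimal sketch; pref names may change between Firefox versions):

    // user.js in the Firefox profile folder, read at every startup
    user_pref("browser.search.serpEventTelemetry.enabled", false);
    user_pref("browser.search.serpEventTelemetryCategorization.enabled", false);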

  • Blisterexe@lemmy.zip · +47/-5 · 1 year ago

    This looks fine: the browser just puts your search into a category like “health” or “tech”, then sends the count for each category completely anonymously.
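
    Roughly, the idea is something like this toy sketch (pure keyword matching for illustration, with category names taken from Mozilla’s published list; it is not Mozilla’s actual classifier):

    from collections import Counter

    # Toy keyword lists; the real classifier is far more sophisticated.
    CATEGORIES = {
        "health": {"flu", "symptoms", "doctor"},
        "tech": {"firefox", "linux", "gpu"},
        "travel": {"flights", "hotel", "visa"},
    }

    def categorize(query: str) -> str:
        words = set(query.lower().split())
        for category, keywords in CATEGORIES.items():
            if words & keywords:
                return category
        return "inconclusive"

    # Only aggregate counts ever leave the machine, never the queries themselves.
    counts = Counter(categorize(q) for q in ["cheap flights to spain", "flu symptoms"])
    print(counts)  # Counter({'travel': 1, 'health': 1})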

    Also, if you’ve already opted out of data collection, that setting applies to this too.

    • A Mouse@midwest.social · +21/-1 · 1 year ago

      I agree. I’m someone who values their privacy and usually doesn’t like opt-out style analytics; however, I also know opt-in skews analytics. The fact that the searches are only categorized, and that they’re using Oblivious HTTP to keep IP addresses private (the relay sees your IP but not the payload, and Mozilla sees the payload but not your IP), makes me A-OK with this.

  • not_a_king@beehaw.org · +31/-3 · 1 year ago

    I know they’re a company and they need to stay afloat, but this should be opt-in, not opt-out.

  • heavyboots@lemmy.ml · +31/-3 · 1 year ago

    All we want is 1990s Google, guys. That’s really all we want. None of this AI BS that can’t find a country in Africa that starts with a K, just Google without the evil enshittification layer on top.

    • Eager Eagle@lemmy.world · +8/-4 · 1 year ago

      I think people forget how awful Google was pre-2008 or so. Not in terms of the bullshit they do nowadays, just in the quality of results, really.

      • heavyboots@lemmy.ml · +2 · 1 year ago

        Huh. I used it pretty much since the start and I certainly don’t recall it being that bad? Usually you got a lot of relevant content up front.

        • notfromhere@lemmy.ml · +4 · 1 year ago

          I feel like you had to learn how to use it: operators, phrasing, etc. They dumbed it down with search suggestions, then went further by changing search terms to synonyms, and now they outright ignore terms. The height of Internet search was definitely pre-2008. More like 2005.

        • Eager Eagle@lemmy.world · +1 · 1 year ago

          If you had the right query, yes. But if you didn’t know the exact words used on the website, getting there took a number of attempts and some google-fu. By the early 2010s this was vastly improved.

      • anachronist@midwest.social · +1 · 1 year ago

        I switched from AltaVista to Google in the early 2000s because the AltaVista index was stale and full of spam. Google’s search tools were comparatively primitive (AltaVista let you do things like word-stem searches), but the results were really good.

  • katy ✨@lemmy.blahaj.zone · +25/-3 · 1 year ago

    Firefox develops an optional predictive search feature, like every other search engine and browser has, that actually protects user privacy and can easily be turned off, so naturally the internet loses its mind over it and declares Firefox dead.

    • refalo@programming.dev · +7/-3 · 1 year ago

      Don’t worry, it’s balanced out by the every-other-day threads of Firefox shills screeching about how much more private it is and how much less RAM it uses.

      People never want to admit that things aren’t black and white.

  • antler@feddit.rocks · +18 · 1 year ago

    As much as I hate to say it, Firefox is a privacy mess.

    Pocket and Fakespot have very bad privacy policies. The Windows version has a unique Mozilla tracker if you download the installer from the website, and the Android version has Google Analytics built in. The existing and new telemetry is a bit heavy, but it’s anonymised, so it’s really the lesser of the various evils.

    My recommendation is LibreWolf & Fennec as alternatives.

  • TheFeatureCreature@lemmy.world · +19/-2 · 1 year ago

    The important part that you should know (and should already be using):

    Remember, you can always opt out of sending any technical or usage data to Firefox. Here’s a step-by-step guide on how to adjust your settings.
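
    If you prefer about:config to the Settings page, the main switch behind “send technical and interaction data” is, as far as I know (worth double-checking on your version):

    datareporting.healthreport.uploadEnabled = false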

  • TCB13@lemmy.world · +15/-6 · 1 year ago

    Innovation and privacy go hand in hand here at Mozilla

    As well as profits and corporate interests.

    People say very good things about Firefox, but they like to hide and avoid the shady stuff. Let me give you the uncensored version of what Firefox really is. Firefox is better than most, no doubt there, but at the same time they have some shady finances and they also do things like adding unique IDs to each installation.

    Firefox does a LOT of calling home. Just fire up Wireshark alongside it and see how much calling home, and even calling of third parties, it does. From basic OCSP requests to calls to Firefox servers and to a third-party analytics company, they do it all, even after disabling most stuff in Settings and about:config like the OP did.
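
    If you want to watch it yourself, even just capturing Firefox’s outgoing DNS queries with tshark makes the chatter obvious (a rough one-liner; swap eth0 for your actual network interface):

    tshark -i eth0 -Y "dns.flags.response == 0" -T fields -e dns.qry.name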

    I know other browsers do it as well, except for Ungoogled Chromium, and because of that I’m sticking with it. I would like to avoid programs that need to snitch whenever I open them. ungoogled-chromium + uBlock Origin + Decentraleyes + ClearURLs and a few others.

    Now you’re free to go ahead and downvote this post as much as you like. I’m sorry for the trouble and the mental breakdown I may have caused by the sudden realization that Firefox isn’t so good and private after all.

  • onlinepersona@programming.dev · +18/-11 · 1 year ago

    To improve Firefox based on your needs, understanding how users interact with essential functions like search is key.

    Buddy, I just want to type a search term and get results. Stop spying on my search. Your only job is to transfer it to the server and then present the result. I don’t need you to suggest some bullshit to me, or think of “ways to improve search”.

    This helps us take a step forward in providing a browsing experience that is more tailored to your needs, without us stepping away from the principles that make us who we are.

    No. What the fuck? They are sounding more and more like Google. We need a new alternative that isn’t built from Gecko or Blink or whatever the engines are called.

    Anti Commercial-AI license

    • FaceDeer@fedia.io · +9/-2 · 1 year ago

      Buddy, I just want to type a search term and get results.

      Telemetry can help them do better at providing that. Devs aren’t magical beings; they don’t know what’s working and what’s not unless someone tells them.

      • onlinepersona@programming.dev · +8/-2 · 1 year ago

        That’s like saying the window pane between me and the teller has to understand the conversation and dynamically modify the light between him and me. The window pane’s only job is to let light through. Keep it at that.

        Anti Commercial-AI license

        • FaceDeer@fedia.io · +5/-2 · 1 year ago

          No, this analogy would make more sense if it was a matter of recording a large number of interactions between customers and tellers to ensure that the window isn’t interfering with their interactions. Is the window the right size? Can the customer and teller hear each other through it? Is that little hole at the bottom large enough to let through the things they need to physically exchange? If you deploy the windows and then never gather any telemetry you have no idea whether it’s working well or if it could be improved.

          • onlinepersona@programming.dev · +5 · 1 year ago

            You’re describing telemetry to improve the overall performance of the window. That’s very different from what Mozilla is doing: listening in on what is sent between the teller and me. They even gave an example of a trip to Spain being recorded as “travel”. That’s going way beyond the performance of a window. The teller is probably already doing that. The window operator has no business listening in on that conversation, nor recording even a summary of its details.

            Anti Commercial-AI license

            • FaceDeer@fedia.io · +5/-3 · 1 year ago

              The analogy isn’t perfect; no analogy ever is.

              In this case the content of the search is all that really matters for the quality of the search. What else would you suggest be recorded, the words-per-minute typing speed, the font size? If they want to improve the search system they need to know how it’s working, and that involves recording the searches.

              It’s anonymized and you can opt out. Go ahead and opt out. There’ll still be enough telemetry for them to do their work.

      • Zaktor@sopuli.xyz · +5/-1 · 1 year ago

        Telemetry doesn’t need topic categorization. This is building a dataset for AI.

          • Zaktor@sopuli.xyz · +2/-1 · 1 year ago

            The example of “search optimization” they want to improve is Firefox Suggest, which has sponsored results that could be promoted (and cost more) based on predicted interest drawn from recent topic trends in your country. “Users in Belgium search for vacations more during X time of day” is exactly the sort of thing you’d use to make ads more valuable. “Users in France follow a similar pattern, but two weeks later” is even better. Similarly, predicting waves of infection from the rise and fall of “health” searches is useful for public health, but also for pushing or tabling ad campaigns.

  • aseriesoftubes@lemmy.world · +7/-1 · 1 year ago

    Here’s the current list of categories we’re using: animals, arts, autos, business, career, education, fashion, finance, food, government, health, hobbies, home, inconclusive, news, real estate, society, sports, tech and travel.

    No pr0n?

    • interdimensionalmeme@lemmy.ml · +6/-2 · 1 year ago

      I want an open-source AI to sort my tabs, understand them, and answer my questions about their content. But running locally and offline.

      • Zaktor@sopuli.xyz · +4/-2 · 1 year ago

        Unless they’re going to publish their data, AI can’t be meaningfully open source. The code to build and train an ML model is mostly uninteresting. The problems come in the form of data and hyperparameter selection, which, intentionally or not, do most of the shaping of the resulting system. When it’s published it’ll just be a Python project with some magic numbers and a “put data here” placeholder, with no indication of what went into selecting the data or choosing those parameters.

        • interdimensionalmeme@lemmy.ml · +1 · 1 year ago

          I just want a command-line interface to my browser; then I’ll tell my local Mixtral 8x7B instance to “look at all my tabs and place all the tabs about ‘magnetic loop antennas’ in a new window, ordered with the most concrete build instructions first”. 100% open-source model. I’m looking into the Marionette protocol to accomplish this. It would be nice if it came with that out of the box.
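
          So far, the plumbing looks roughly like this with the marionette_driver Python package (an untested sketch; it assumes Firefox was started with the -marionette flag, and the model step is just a placeholder):

          from marionette_driver.marionette import Marionette

          # Connect to a running Firefox started with: firefox -marionette
          client = Marionette(host="127.0.0.1", port=2828)
          client.start_session()

          # Collect (title, url) for every open tab, to hand off to a local model later.
          tabs = []
          for handle in client.window_handles:
              client.switch_to_window(handle)
              tabs.append((client.title, client.get_url()))

          for title, url in tabs:
              print(title, url)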

          • Zaktor@sopuli.xyz · +4/-1 · 1 year ago

            What does “open source” mean to you? Just free/noncorporate? Because a “100% open source model” doesn’t really make sense by the traditional definition. The “source” for a model is its data, not the code and not the model itself. Without the data you can’t build the model yourself, can’t modify it, and can’t inspect why it does what it does.

            • interdimensionalmeme@lemmy.ml · +1 · 1 year ago

              I think the model can be modified with LoRA without the source data? In any case, if the inference software is actually open source, all the necessary data is free of any intellectual-property encumbrances, and it runs without internet access on commodity hardware.

              Then it’s open source enough to live in my browser.

              • Zaktor@sopuli.xyz · +2/-1 · 1 year ago

                You can technically modify any network’s weights however you want with whatever data you have lying around, but without the core training data you can’t verify that your modifications aren’t hurting the original capabilities. Fine-tuning (which LoRA is for) isn’t the same thing as modifying a trained network: you’re still generally stuck with the original trained capabilities; you’re just reworking the final layer(s) to redirect/tune it toward your problem. You can’t add pet faces to a human face detector, and if a new technique comes out that could improve accuracy, you can’t rebuild the model with it.

                In any case, if the inference software is actually open source, all the necessary data is free of any intellectual-property encumbrances, and it runs without internet access on commodity hardware.

                Then it’s open source enough to live in my browser.

                So just free/noncorporate. A model is effectively a binary and the data is the source (the actual ML code is the compiler). If you don’t get the source, it’s not open source. A binary can be free and non-corporate, but it’s still not source code.

                  • interdimensionalmeme@lemmy.ml · +1 · 1 year ago

                    I mean, I would prefer a dataset that’s properly open: The Pile, LAION, Open Assistant, and a pirate copy of every word, song, and video ever written and spoken by man.

                    But for now I’d be happy to fully control my browser with an offline copy of Mixtral or Llama.