• 4 Posts
  • 35 Comments
Joined 2 years ago
Cake day: June 5th, 2023


  • That’s going to change in the future with NPUs (neural processing units). They’re already being bundled with both regular CPUs (such as the Ryzen 8000 series) and mobile SoCs (such as the Snapdragon 8 Gen 3). The NPU included with the SD8Gen3, for instance, can run models like Llama 2 - something an average desktop would normally struggle with. Now, this is only the 7B model mind you, so it’s a far cry from more powerful models like the 70B, but this will only improve in the future. Over the next few years, NPUs - and applications that take advantage of them - will be a completely normal thing, and it won’t require a household’s worth of energy. I mean, we’re already seeing various applications of it, e.g. in smartphone cameras, photo editing apps, digital assistants etc. The next would be, I guess, autocorrect and word prediction, and I for one can’t wait to ditch our current, crappy Markov-chain keyboards.
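
    To put the 7B vs 70B gap in perspective, here’s a rough back-of-the-envelope estimate (in Python, purely as an illustration) of the weights-only memory footprint at common quantization levels - real usage is higher once you add the KV cache, activations and runtime overhead:

      # Approximate size of an LLM's weights at common precisions.
      BYTES_PER_PARAM = {
          "fp16": 2.0,   # half-precision weights
          "int8": 1.0,   # 8-bit quantized
          "int4": 0.5,   # 4-bit quantized (typical for on-device inference)
      }

      def weights_gb(params_billions: float, dtype: str) -> float:
          """Weights-only footprint in gigabytes (billions of params * bytes per param)."""
          return params_billions * BYTES_PER_PARAM[dtype]

      for model, params in [("Llama 2 7B", 7), ("Llama 2 70B", 70)]:
          print(model, {d: f"~{weights_gb(params, d):g} GB" for d in BYTES_PER_PARAM})

      # Llama 2 7B {'fp16': '~14 GB', 'int8': '~7 GB', 'int4': '~3.5 GB'}
      # Llama 2 70B {'fp16': '~140 GB', 'int8': '~70 GB', 'int4': '~35 GB'}

    In other words, a 4-bit 7B model roughly fits in a flagship phone’s RAM, while even an aggressively quantized 70B is still way out of reach for on-device NPUs.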






  • [Meta] I don’t think there’s a need to cross-post this within Beehaw. Beehaw is low-activity as it is (in terms of new posts), so most people here would just be browsing new/local and would end up seeing this post in their feed twice.

    Even if you’re not browsing by local, most people in this community are likely also subscribed to the Technology community, so again, there’s a double-up.





  • From @SuperIce@lemmy.world:

    If the PoS supports tokens, it’ll use unique tokens for each payment. If the PoS doesn’t support tokens, the phone has a virtual credit card number linked to the real one, so if it does get stolen, you can just remove the card from your Google Wallet to deactivate it. Your real card number is never exposed.

    Even then, credit card numbers on their own aren’t that useful anymore. Any online payment needs the CVC and PoS devices usually require chip or tap cards, which don’t use the number. On top of that, credit card companies have purchase price restrictions when using swipe because of the security risks vs chip (which is why most PoS devices don’t support swipe anymore).
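
    To illustrate the scheme described above (a purely conceptual sketch - the class and method names are made up, and this is not how any real payment network is implemented):

      import secrets

      class WalletCard:
          """Toy model of the tokenization described above: the real card
          number never leaves the wallet/issuer; merchants only ever see a
          one-time token or a revocable virtual card number."""

          def __init__(self, real_pan: str):
              self._real_pan = real_pan          # never exposed to merchants
              self.virtual_pan = "4" + "".join(
                  secrets.choice("0123456789") for _ in range(15))
              self.active = True

          def pay(self, pos_supports_tokens: bool) -> str:
              if not self.active:
                  raise RuntimeError("card removed from wallet; virtual number deactivated")
              if pos_supports_tokens:
                  return secrets.token_hex(16)   # fresh single-use token per payment
              return self.virtual_pan            # fallback: revocable virtual number

          def remove_from_wallet(self) -> None:
              """Kills the virtual number without affecting the real card."""
              self.active = False

      card = WalletCard(real_pan="4111111111111111")
      card.pay(pos_supports_tokens=True)    # unique token, different every time
      card.pay(pos_supports_tokens=False)   # virtual number, same until revoked
      card.remove_from_wallet()             # real card keeps working elsewhere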



  • I did the TV -> projector swap last year: got myself a 4K projector that sits above my bed and projects a massive 100" image on the opposite wall, and it’s awesome. I’ve got my PS5 and Switch hooked up to it, and I’m currently living the dream of being able to game and watch movies on a giant screen, all from the comfort of my bed. Some games really shine on a screen like that and you see them in a new light - TotK, the Horizon series, Spider-Man etc - and it’s 100% worth the switch, IMO.

    Now, I also have a regular monitor - a nice low-latency QHD 16:10 monitor with HDR - hooked up to my PC, which also uses a 6600 XT btw. The main reason I keep this setup is productivity, running some PC games that don’t have console equivalents, plus the colors look much nicer than on my projector. Maybe if I bought a laser projector and had one of those special ALR screens I could get nicer colors, but all that is way beyond my budget. Although these days I’m not on my desktop as much as I used to be (I also have a Ryzen 6000 series laptop that I game on btw), I still like my desktop for its flexibility and upgradability.

    I also explored the option of switching to a cloud-first setup and ditching my rig, back when I wanted to upgrade my PC and we had all those supply chain issues during Covid, but in the end cloud gaming didn’t really work out for me. In fact, after exploring all the cloud options, I’ve been kind of put off by cloud computing in general - at least the public clouds offered by the likes of Amazon and Microsoft - they’re just in it to squeeze you dry and take control away from you, and I don’t like that one bit. If I were to lean towards cloud anything, it’d be rolling my own, maybe using something like a Linode VM with a GPU, but the pricing doesn’t justify it for anything beyond casual usage. And that’s one of the things I like about a PC: I could have it running 24x7 if I wanted to and not worry about getting a $200 bill at the end of the month, like I got with Azure, because scummy Microsoft didn’t explain anywhere that you’d be paying for Bastion even if the VM was fully powered off…

    Anyways, back to the topic of CPUs: I don’t really think we’re at the cusp of any re-imagining - what we’ve been seeing is just gradual, natural improvement, maybe with the PC taking inspiration from the mobile world. I haven’t seen anything revolutionary yet; it’s all been evolutionary. At most, I think we’ll see more ARM-like designs - like the integrated RAM you mentioned - more SoC/integrated solutions, maybe AI/ML cores being the new thing to look for in a CPU, and maybe ARM itself making more inroads into the desktop and laptop space, since Apple has shown that you can use ARM for mainstream computing.

    On the revolutionary side, the things I’ve been reading about are stuff like quantum CPUs or DNA computers, but these are still very experimental, with very niche use-cases. In the future I imagine we might have something like a hybrid semi-organic computer, with a literal brain that forms organic neural networks and evolves as per requirements - I think that would be truly revolutionary, but we’re not there yet, not even at the cusp of it. Everything else I’ve seen from the likes of Intel and AMD has just been evolutionary.






  • If people really felt strongly about this, we would’ve seen it being done already. Perhaps the state of Lemmy right now is “good enough”, so folks don’t care too strongly about the lack of a minor feature, or maybe they find it easier to just migrate to something like Kbin instead and still federate with Lemmy. Or maybe they prefer to just write a simple patch, which can be maintained and distributed separately, instead of forking the entire codebase. After all, it’s easy enough to make a fork, but a PITA to maintain one. It’s much easier to just maintain a separate patch set or some standalone utilities.

    Also, frontend features, like the infinite scrolling one that was quoted, are basically non-issues, considering how many good alternative frontends exist, such as Photon, Alexandrite, mlmym, slemmy, etc. There’s no rule that you have to use the default frontend. In fact, many Lemmy instances already host these frontends on their own servers, and if they wanted to, they could easily make one of them the default landing page.


  • That’s what I’m saying here. The online circle that considers that transparency and control are the primary reason to choose software at the expense of feature limitations or poor UX is a very small niche

    And what I’m saying is, why does that matter here? The argument was about whether or not the opensource world exists, and has nothing to do with how big or small this niche is.

    That is important because sometimes open source devs forget about that and don’t focus enough on the things that matter to consumers. And sometimes the open source community, such as it is, will excuse this or even take pride on working around it on the basis of that performative sense of belonging and righteousness. I think that’s a risk for everybody, which is the part that annoys me about it.

    I don’t see what’s wrong with that, or why it should annoy you. If you disagree with the dev’s philosophy, then fork the software and fix it yourself - that’s the beauty of opensource: you don’t need to agree with the dev or wait for them. And if you don’t have the skills to fix it yourself, sponsor someone who can. Or just use different software. No one’s holding a gun to your head and forcing you to be here. There’s no reason it should annoy you.


  • Could have fooled me, because I have maybe half a dozen Android installs on devices that run all the same applications and are functionally identical to any manufacturer version out there without being related to them at all.

    ???

    there is no major concern for most people about where their Android build is sourced as long as it runs Android apps.

    And that’s not my point at all. As I mentioned earlier, I only brought up Android because it’s the only mainstream mobile OS which allows sideloading apps and has alternative app stores. Whether Android in itself is opensource or not is irrelevant in this context, when I’m talking specifically about Android apps as an example. Also, I never claimed it was a concern for “most” people, and again, that’s beside the point.

    the open source “world” is not dictated by being built on open source code and instead dictated by a label of purity based on the lack of proprietary, monetized or closed source portions then… yeah, that’s annoying. It’s computer veganism

    Actually, it isn’t. It (the motivation for opensource) has nothing to do with any of the things you mentioned - it’s more about transparency and control for end users (and faster development lifecycles for developers). As I’ve repeatedly mentioned, people are increasingly getting sick of their apps being filled with ads and trackers, all the corporate spying and data harvesting, and the general enshittification of services. That’s one of the factors driving end users to seek out opensource software.


  • People on Android are on an open source OS

    No, they’re not - at least, not by default. The Android that comes pre-installed on most phones is actually closed source; the only reason I mentioned Android is that it’s the only mainstream mobile OS which lets you sideload apps and even install alternative app stores. There are regular threads here (and there were back on Reddit) showcasing opensource apps, and people asking for opensource alternatives.

    There’s most certainly an opensource world, whether you acknowledge it or not, and I don’t see why it’s “annoying”.