SNW at least had some bangers, the Elysian Kingdom episode captured that TNG magic
Techlos@lemmy.dbzer0.com to Free and Open Source Software@beehaw.org • A curated list of awesome FOSS games
3 · 7 days ago
There’s even an android port, although it’s pretty hard to do the ship twiddles on touchscreen lol
Techlos@lemmy.dbzer0.com to Free and Open Source Software@beehaw.org • A curated list of awesome FOSS games
11 · 7 days ago
A suggestion: OpenTyrian
https://github.com/opentyrian/opentyrian
One of the greatest shmups ever; the creator open-sourced it a while ago. Still weirdly addictive too, it’s the only DOS-era game I still play regularly
Techlos@lemmy.dbzer0.com to Asklemmy@lemmy.ml • How can I ensure any videos I record on my phone aren't deleted by the cops?
1 · 9 days ago
Hmm, CIFS document provider on fdroid might work?
Techlos@lemmy.dbzer0.com to Asklemmy@lemmy.ml • How can I ensure any videos I record on my phone aren't deleted by the cops?
6 · 10 days ago
Set up a local VPN on your NAS/file server and add your phone to the VPN. Next, map a network drive on your phone to the NAS. Dunno if you can do that on iOS, but at least on android you can. Disable modify and delete permissions on the NAS folder so the phone can write new files and read them back but can’t overwrite or delete anything, then use a camera app that lets you choose the save location and set it to record video to the network drive. Requires a pretty solid connection, but is extremely robust.
Techlos@lemmy.dbzer0.com to Mildly Infuriating@lemmy.world • Playback speed past X2 is now a YouTube paid feature
292 · 10 days ago
Idk what to tell you, a lot of youtubers talk pretty fuckin slow
Still the best way to watch wrestling. Toni Storm hits different in fuzzy 4:3 PAL format
Techlos@lemmy.dbzer0.com to Not The Onion@lemmy.world • Don’t say ‘Watch out for ice’: FEMA warned storm announcements could invite memes
3 · 10 days ago
A water balloon full of saturated brine. This is a standalone sentence that could be totally unrelated to the prior conversation; I just like salty water.
Also unrelated, but wet clothes can be hundreds of times less insulating than dry clothes, so stay dry in the winter storm!
Techlos@lemmy.dbzer0.com to World News@lemmy.world • German media likens US border patrol official’s coat to ‘Nazi look’
121 · 10 days ago
Probably wherever the trees are exploding
Make bread, cook bread. Put some bread in water, let it get the funny mold. Cook the rest so it’s toasty. Strain the moldy bread water, chill the bread water. Cook down the bread soup gunk until it’s a paste, smear it on the cooked bread and drink the bread water.
Vegemite on toast with a beer on the side is weird when you spell it out.
Techlos@lemmy.dbzer0.com to Technology@lemmy.world • Bandcamp bans purely AI-generated music from its platform
26 · 19 days ago
on average, i’m earning 3x as much from bandcamp FLAC sales as i do from streaming. Hasn’t been ruined so far.
It’s not a stand-in. It’s repurposing the meme format to focus on the current figureheads of various oppressive states; the fact that the original used communism has no bearing on the rework. There is no implicit or explicit link between the subjects of the original and those of the rework, save for their relative locations in the image.
I hope this removes any room for further misinterpretation.
Techlos@lemmy.dbzer0.com to News@lemmy.world • In 'Unhinged' Rant, Miller Says US Has Right to Take Over Any Country For Its Resources
4 · 27 days ago
Flensing seems worth a shot
It’s the good shit
If you want that long tail, bandcamp and soundcloud are better sources. The barrier to entry is low with those, and there’s a plethora of small, niche artists just doing their own thing.
For a representative snapshot of music though, it’s pretty amazing. It shows what a massive percentage of the planet listens to, hopefully preserved across many seeds, and historians will love shit like this in the future.
Techlos@lemmy.dbzer0.com to Lemmy Shitpost@lemmy.world • And they even get a seizure when you take their ipads away
1 · 5 months ago
deleted by creator
Techlos@lemmy.dbzer0.com to Microblog Memes@lemmy.world • AI bro discovering imagination
6 · 5 months ago
> I bet you can train yourself to see mental images

my own experience leads me to believe this isn’t always possible; I’ve spent years trying to ‘train’ a visual imagination with no results. Combined with finding out that even DMT doesn’t give me visuals, i’m pretty sure it’s impossible for my brain to picture anything.
Techlos@lemmy.dbzer0.com to Not The Onion@lemmy.world • People Are Furious That OpenAI Is Reporting ChatGPT Conversations to Law Enforcement
6 · 5 months ago
this is a good guide on a standard approach to agentic tool use for LLMs
For conversation history:
this one is a bit more arbitrary, because it seems whenever someone finds a good history model they found a startup around it, so i had to make it up as i went along.
First, BERT is used to create an embedding for each input/reply pair, and a timestamp is attached to give it some temporal info.
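That step is roughly this shape (bert-base-uncased and mean pooling are just what i defaulted to, any BERT variant would do):

```python
# rough sketch of the embedding step: mean-pooled BERT over the input/reply
# pair, with a wall-clock timestamp stored alongside for temporal info
import time
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed_text(text: str) -> torch.Tensor:
    tokens = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**tokens).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)               # mean-pool to (768,)

def embed_pair(user_input: str, reply: str) -> dict:
    text = f"User: {user_input}\nAssistant: {reply}"
    return {"embedding": embed_text(text), "timestamp": time.time(),
            "input": user_input, "reply": reply}
```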
A buffer keeps the last 10 or so input/reply pairs that fall beyond the token limit, and periodically summarises the short-term buffer into paragraph summaries that are then embedded; i used an overlap of 2 pairs between each summary. An additional buffer does the same thing with the summaries themselves, creating megasummaries. These get put into a hierarchical database where each summary has a link back to whatever generated it. Depending on how patient i’m feeling, i can configure the queries to search the database on either the megasummary embeddings or the summary embeddings.
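A very stripped-down sketch of that hierarchy (summarise_fn is just a call back into the local model with a summarisation prompt, and the chunk/overlap sizes here are arbitrary):

```python
# rolling buffer of recent pairs that periodically rolls up into linked summary
# records; megasummaries are the same idea run over the summary records
import time
import itertools

class HierarchicalMemory:
    def __init__(self, embed_fn, summarise_fn, chunk=6, overlap=2):
        self.embed_fn = embed_fn        # e.g. embed_text from above
        self.summarise_fn = summarise_fn
        self.chunk = chunk              # pairs rolled into each summary
        self.overlap = overlap          # pairs shared between consecutive summaries
        self.ids = itertools.count()
        self.db = {}                    # record id -> record, stands in for the real db
        self.buffer = []                # ids of recent pairs not yet summarised

    def _store(self, kind, text, sources):
        rid = next(self.ids)
        self.db[rid] = {"id": rid, "kind": kind, "text": text,
                        "embedding": self.embed_fn(text),
                        "timestamp": time.time(),
                        "sources": sources,   # links back down the hierarchy
                        "retrievals": 0}
        return rid

    def add_pair(self, user_input, reply):
        rid = self._store("pair", f"User: {user_input}\nAssistant: {reply}", [])
        self.buffer.append(rid)
        if len(self.buffer) >= self.chunk:
            window = self.buffer[:self.chunk]
            text = "\n\n".join(self.db[i]["text"] for i in window)
            self._store("summary", self.summarise_fn(text), window)
            # keep the last `overlap` pairs so consecutive summaries share context
            self.buffer = self.buffer[self.chunk - self.overlap:]
        return rid
```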
During conversation, the input as well as the last input/reply get fed into a different instruction prompt that says “Create three short sentences that accurately describe the topic, content, and if mentioned the date and time that are present in the current conversation”. The sentences, along with the input + last input/reply, are embedded and used to find the most relevant items in the chat history based on a weighted metric of 0.9*(1 - cosine_distance(input+reply embedding, history embedding)) + 0.1*(input.norm(2) - reply.norm(2))^2 (the magnitude term worked well when i was doing unsupervised learning, completely arbitrary here), and the matches get added to the system prompt with some instructions on how to treat the included text as prior conversation history with timestamps. I found two approaches that worked well.
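The metric itself, spelled out (the 0.9/0.1 split is arbitrary, and input_emb/reply_emb here are the separate embeddings of the current input and last reply, which is just how i happened to wire it):

```python
# relevance score between the current turn and one stored history entry;
# 1 - cosine_distance is just cosine similarity
import torch
import torch.nn.functional as F

def relevance(query_emb, input_emb, reply_emb, history_emb):
    cosine_sim = F.cosine_similarity(query_emb, history_emb, dim=0)
    magnitude = (input_emb.norm(2) - reply_emb.norm(2)) ** 2
    return 0.9 * cosine_sim + 0.1 * magnitude
```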
for lightweight predictable behaviour, take the average of the embeddings and randomly select N entries from the database, weighted by how closely they match, using softmax + temperature to control memory exploration.
For more varied and exploratory recall, use N randomly weighted averages of the embeddings and find the closest match for each. A bit slower because you have way more database hits, but it tends to perform much better at tying together relevant information from otherwise unrelated conversations. I prefer the first method for doing research or literature review; the second method is great when you’re rubber ducking an idea with the LLM.
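Both recall modes, roughly; query_embs is the stack of embeddings from the current turn (the three generated sentences plus the input + last input/reply) and db_embs is the matrix of stored history embeddings:

```python
import torch
import torch.nn.functional as F

def recall_lightweight(query_embs, db_embs, n=4, temperature=0.5):
    """Average the query embeddings, softmax the similarities, sample N entries."""
    n = min(n, len(db_embs))
    query = query_embs.mean(dim=0)
    sims = F.cosine_similarity(query.unsqueeze(0), db_embs, dim=1)
    probs = F.softmax(sims / temperature, dim=0)   # temperature = memory exploration
    return torch.multinomial(probs, n, replacement=False).tolist()

def recall_exploratory(query_embs, db_embs, n=4):
    """N randomly weighted averages of the query embeddings, closest match for each."""
    picks = []
    for _ in range(n):
        weights = torch.rand(query_embs.shape[0])
        query = (weights / weights.sum()) @ query_embs   # random convex combination
        sims = F.cosine_similarity(query.unsqueeze(0), db_embs, dim=1)
        picks.append(int(sims.argmax()))
    return picks
```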
an N of 3~5 works pretty well without taking up too much of the context window. Including the most relevant summary as well as the input/reply pairs gives the best general behaviour, striking a good balance between broad recall and detail recall. The stratified summary approach also lets me prune input/reply entries if they don’t get accessed much (whenever the db goes above a size limit, a script prunes a few dozen entries based on retrieval counts), while leaving the summary to retain some of the information.
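The pruning pass is about as dumb as it sounds (size limit and batch size are arbitrary):

```python
# drop the least-retrieved raw pairs once the db gets too big; summaries are
# never pruned, so the gist of a pruned pair survives one level up
def prune(db, max_records=5000, batch=50):
    if len(db) <= max_records:
        return
    pairs = sorted((r for r in db.values() if r["kind"] == "pair"),
                   key=lambda r: r["retrievals"])       # least-used first
    for record in pairs[:batch]:
        del db[record["id"]]
```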
It works better if you don’t use the full context window the model is capable of. Transformer models just aren’t that great at needle-in-a-haystack problems, and for the qwen models i’ve found a context window of 8~10k is the sweet spot for paying attention to both the recalled history and the current conversation.
A ¿fun? side effect of this method is that using different system prompts for LLM behaviour will affect how the LLM summarises and recalls information, and it’s actually quite hard to keep the LLM from developing a “personality”, for lack of a better word. My first implementation included the phrase “a vulcan-like attention to formal logic”, and within a few days of testing every conversation summary started with “Ship’s log” and the model developed a habit of calling me captain, probably a side effect of summaries ending up in the system prompt. It was pretty funny, but not exactly what i was aiming for.
apologies if this was a bit rambling, just played a huge set last night and i’m on a molly comedown.
Techlos@lemmy.dbzer0.com to Not The Onion@lemmy.world • People Are Furious That OpenAI Is Reporting ChatGPT Conversations to Law Enforcement
7 · 5 months ago
i’m running 2x 3060s (12GB cards, VRAM matters more than clock speed usually) and if you have the patience to run them, the 30b qwen models are honestly pretty decent; if you have the ability and data to fine-tune or LoRA them for the task you’re doing, they can sometimes exceed zero-shot performance from SOTA subscription models.
the real performance gains come from agentifying the language model. With access to wikipedia, arxiv, and a rolling embedding database of prior conversations, the quality of the output shoots way up.
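the tool side is nothing fancy, roughly this shape (plain requests against the public wikipedia REST and arxiv export APIs; the conversation recall is the embedding-db setup i described in my other comment):

```python
# bare-bones wikipedia/arxiv tools the model can call; error handling and
# result ranking are left out
import requests
import xml.etree.ElementTree as ET

def wikipedia_summary(title: str) -> str:
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title.replace(' ', '_')}"
    return requests.get(url, timeout=10).json().get("extract", "")

def arxiv_search(query: str, max_results: int = 3) -> list[str]:
    xml = requests.get("http://export.arxiv.org/api/query",
                       params={"search_query": f"all:{query}",
                               "max_results": max_results}, timeout=10).text
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    return [(e.findtext("atom:title", namespaces=ns) or "").strip() + ": " +
            (e.findtext("atom:summary", namespaces=ns) or "").strip()
            for e in ET.fromstring(xml).findall("atom:entry", ns)]

TOOLS = {"wikipedia": wikipedia_summary, "arxiv": arxiv_search}
# the model emits something like {"tool": "wikipedia", "arg": "Voyager 1"} and
# the result gets pasted back into its context before the next generation
```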


Ahh, cocaine and Cialis. A classic combo for cardiac catastrophe