This is… “better” than expected?
Now the question to consider is: how many 5-second video requests are made of AI every minute?
Well I’ve been running my microwave for an hour and a half and I still don’t have a 5-second video of a dog doing the Charleston to show my grandkids, what the actual heck
and the hotpocket is still cold in the middle
Breaking down energy use on a request-by-request basis is pointless.
wait does that compare favorably to regular film production?
I would imagine it’s pretty close to the energy costs of 3D animation rendering but that’s purely conjecture.
finally some real data on this
It’s still a lot of extrapolation, but better than before. Researchers measured the energy usage of open-source LLMs and of image and video generators. Then, based on this Microsoft paper and interviews with other experts, they assumed that doubling the GPU’s energy usage roughly accounts for the energy usage of everything else involved. An interesting finding is that Stable Diffusion image generation is often less energy-intensive than prompting high-parameter LLMs (around five seconds of microwaving).
But there are still so many variables: the kind of model, the number of parameters, the complexity of the prompt, which data center your prompt goes to, what the energy mix in its area is like… The researchers still make a point of emphasising how opaque companies are about their closed models.
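The doubling heuristic described above is simple enough to sketch. This is a back-of-envelope illustration, not the researchers' actual methodology; the per-request watt-hour figure and the 1 kW microwave rating are made-up placeholders for the arithmetic, not numbers from the article.

```python
def total_energy_wh(gpu_energy_wh: float, overhead_factor: float = 2.0) -> float:
    """Approximate total energy by scaling measured GPU energy.

    The heuristic: everything besides the GPU (CPU, cooling,
    networking, etc.) roughly doubles the GPU's own draw.
    """
    return gpu_energy_wh * overhead_factor


def microwave_seconds(energy_wh: float, microwave_watts: float = 1000.0) -> float:
    """Express an amount of energy as seconds of running a microwave."""
    # 1 Wh = 3600 J; divide by the microwave's power in watts (J/s).
    return energy_wh * 3600.0 / microwave_watts


gpu_wh = 1.5  # hypothetical GPU energy for one generation, in watt-hours
total = total_energy_wh(gpu_wh)   # 3.0 Wh after the doubling heuristic
print(microwave_seconds(total))   # 10.8 seconds at an assumed 1 kW
```

The comparison unit is arbitrary, which is part of the thread's point: the same joules read very differently as "seconds of microwaving" versus a raw watt-hour figure.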
good points
I can generate a 5-second AI video with my local LTX-2 model in less than a minute. C'mon now…
Why is the linked Technology Review website so shit?

You would think MIT could have spent some of Epstein's money on web design.