Supposedly AI is going to take all the jobs, and yet it still can’t do this task, which it seems perfectly suited for. Sure, AI will eventually get good enough to do it, but there is just way too much hype given the reality of the current situation. This is a job that fast food workers are already required to do in addition to other duties, so it’s not labor-saving from the company’s perspective either.
There is no certainty that LLMs can overcome the current limitations they are stumbling on.
I expect further developments in AI, but there is no guarantee they will come. These models seem to be running into the Pareto Principle just like self-driving car ML models did, and that despite huge investments.
Breakthroughs are so interesting and the reason predicting the future of tech is so hard. Text embedding and “Internet scale” training are likely the things that allowed this AI boom and the amazing initial results.
I think many people see AI (and other tech) moving linearly from the current point forward but any software developer knows this is rarely the case. And no one can predict the next breakthrough.
The hype and confusion around ML/LLM/AGI don’t help either. Because LLMs seem intelligent on the surface, people (much like politicians) misunderstand their capabilities. They certainly have fantastic uses just as they are now, but a lot of people are overly optimistic (or pessimistic, depending on your point of view) about our new “AI overlords”.
Personally, I find LLMs absolutely amazing at supporting my professional writing. I don’t let them do my work, but they help me play around to find a better way to express things, as if I had a sparring partner for writing.