Seems it does a good job at some medical diagnosis type stuff from image recognition.
That isn’t an LLM though. That’s a different type of Machine Learning entirely.
What’s the difference? I thought they both use the same underlying technology?
Sure, but that’s kind of like saying simple addition and fourier transforms are the same because they both use numbers.
Are you sure they don’t use the same exact type of neural network, but are just trained on different datasets? Do you have any link that shows those cancer diagnosis AIs use a different technology?
Edit: nvm, I found it. Those AI diagnostic tools use convolutional neural networks (CNNs), which are not the same as LLMs.
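For anyone curious what the difference actually looks like: the core operation of a CNN is a small learned filter that slides over the pixels of an image, whereas an LLM's transformer predicts the next token in a sequence. A toy sketch of just the convolution part, in plain Python with NumPy (purely illustrative, nothing like a real diagnostic model):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over an image: the core op of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum of the patch under the filter at this position
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A tiny "image" with a vertical edge down the middle
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A hand-made vertical-edge detector; a real CNN *learns* these values
edge_kernel = np.array([
    [-1.0, 1.0],
    [-1.0, 1.0],
])

print(conv2d(image, edge_kernel))
```

The output lights up exactly where the edge is. A trained CNN stacks many of these learned filters to go from edges to textures to "this region looks like a tumor", which is a fundamentally different job from predicting the next word.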
I reckon it’s fair to refer to them both under the broad term “A.I.”, though, even if it is technically incorrect. The semi-evolved have all decided to call it A.I., so we’re all calling it A.I., I guess.
A similar type of machine learning (neural networks, transformer model type thing), but I assume one is built and trained explicitly on medical records instead of scraping the internet for whatever. Correct me if I am wrong!
@YourMomsTrashman A purpose-designed system might have the same underlying POTENTIAL for garbage output, IF you train it inappropriately. But it would be trained on a discretely selected range of content that is both relevant to its purpose and carefully vetted to ensure it’s accurate (or at least believed to be).
A cancer-recognizing system, for example, would be trained on known examples of cancer, and ONLY that.
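That curation step is basically a filter over the training corpus: keep only records that are relevant to the task and have been vetted. A toy sketch of the idea (the field names and records here are made up for illustration; real pipelines would involve expert review, not a boolean flag):

```python
# Hypothetical training records; "vetted" stands in for expert review.
records = [
    {"image_id": "a1", "label": "malignant", "vetted": True},
    {"image_id": "a2", "label": "cat_meme",  "vetted": False},
    {"image_id": "a3", "label": "benign",    "vetted": True},
    {"image_id": "a4", "label": "malignant", "vetted": False},
]

# Only labels relevant to the diagnostic task are allowed in.
ALLOWED_LABELS = {"malignant", "benign"}

def curate(records):
    """Keep only vetted records whose labels are relevant to the task."""
    return [r for r in records
            if r["vetted"] and r["label"] in ALLOWED_LABELS]

training_set = curate(records)
print([r["image_id"] for r in training_set])
```

Only the vetted, on-topic records survive; the random internet junk never makes it into training, which is the whole point of a purpose-built dataset.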
@YourMomsTrashman I’m no expert, but my sense is that you’re probably correct. This seems to me a version of the long-understood GIGO principle in computing (Garbage In, Garbage Out), also a principle in nearly all forensics of any kind. Your output can only be as good as your input.
Most of our general-use ‘AI’ (scorn quotes intentional) has been trained on an essentially random corpus of any and all content available, including a lot of garbage.
A purpose-designed system would not be.