9 Comments

I made that same within-a-decade prediction in a conference talk early last year.

I posed the question in a Q&A at an SXSW panel earlier this week: three of the four panelists said within 5 years; one said in TWO years.

I think Chomsky summed this up quite nicely:

Doesn't think, doesn't reason, and doesn't understand. Has no world knowledge, just statistical likelihoods of word sequences in the context of other word sequences. The illusion of understanding. It doesn't matter how large you scale it; it will never think or reason.

https://web.archive.org/web/20230310163612/https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

One can debate whether it can "really" understand or reason, but that strikes me as an irrelevant philosophical point rather than a comment on the actual utility of the technology.

True. Something is either useful or not; I agree 100%. The issue for me is that because it has no world knowledge or ability to reason, it can't know whether something is right (ethically or factually). It only "knows" that one bunch of words is statistically more likely to follow another bunch of words, based on whatever training material it's been fed. That puts all the onus on the consumer to do the right thing: you need to check all output for correctness, and if you're not an expert in the subject matter at hand, you're forced to cross-reference other sources or trust it blindly.
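
To make the "statistically more likely" point concrete, here is a minimal toy sketch: a bigram model that generates fluent-looking continuations purely from co-occurrence counts in its training text, with no notion of whether the output is true. (This is only an illustration of next-word prediction in miniature; it is nothing like GPT's scale or architecture, and the training text here is made up.)

```python
# Toy bigram "language model": learns only which word tends to follow which.
from collections import Counter, defaultdict
import random

training_text = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick a successor weighted by how often it appeared after `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation: plausible-looking word order, no understanding.
current, output = "the", ["the"]
for _ in range(6):
    current = next_word(current)
    output.append(current)
print(" ".join(output))
```

Everything the model "knows" lives in those frequency counts, which is why checking the output for factual or ethical correctness still falls entirely on the reader.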

Time between release dates does not equate to time between development and training dates. A model can be developed in parallel and not released until desired.

Footnote 2 above

Agree. The rate of change in this, and in technology in general, is exponential. You can't help but wonder where it will end up in a year or three.

Human language isn't "laden" with metaphors; it is bejeweled with them.

"GPT-4 supports reasoning..." -- nope, no reasoning here. It accepts images as prompts, just as it accepts text as prompts, and it guesses a high likelihood successor to those prompts. That's not reasoning.

I like your word choice!
