Just got back from a small (i.e., highly selective) scientific meeting in the UK. The topic was modern machine learning (including deep nets, graph nets, transformers, etc.): how these models work, their strengths, and their limitations. My own talk was about why these approaches can never achieve "strong AI", the recent "news" about GPT-3 "achieving sentience" notwithstanding.
This is a photo of the attendees.
Kudos to anyone who can identify anyone in the photo.
Footnote: A Google employee was recently reprimanded for declaring that the company's LaMDA chatbot had achieved "sentience" based on its ability to "converse". He has since hired a lawyer to represent it as a sentient being. This case is going to be heard by SCOTUS, a body of people decidedly unqualified to opine on the question of machine intelligence.

More recently still, GPT-3 was asked to write a scientific paper about itself. The only (barely mentioned) flaw in its ability to "write" said paper is that the researchers had to coax it through the process by asking it a series of detailed questions. In other words, it could not craft the paper on its own. This is no surprise: doing something as complex as writing a paper entails understanding the structure of an argument, an ability well beyond GPT-3, which is essentially limited to responding to simple questions and requests. (More formally, it is capable of computing mappings from inputs, such as questions, to outputs, i.e., answers.) By contrast, composing a whole paper requires a cognitive architecture that can form goals and organize its knowledge hierarchically, tasks that are beyond any simple input-output mapping engine. For students of the Philosophy of Mind, the strengths and weaknesses of GPT-3 are a wonderful demonstration of the limitations of the Turing test as a measure of machine intelligence.
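To make that last distinction concrete, here is a minimal Python sketch. The function query_llm is a hypothetical stand-in for any LLM API, not a real endpoint; the point is that the goal and the hierarchical outline of the "paper" live in the surrounding script, just as they lived in the researchers' coaxing questions, and not in the model itself.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call: one prompt in, one completion out.
    This is the entirety of what an input-output mapping engine provides."""
    return f"<completion for: {prompt!r}>"

# What GPT-3-style models do on their own: answer a single question.
answer = query_llm("What is a transformer?")

def write_paper(topic: str) -> str:
    """What composing a paper requires: an external loop that holds the goal,
    decomposes it hierarchically, and stitches the pieces together.
    Note that the outline (the argument's structure) is supplied here,
    by the caller, not generated by the model."""
    outline = ["Abstract", "Introduction", "Methods", "Results", "Discussion"]
    sections = []
    for heading in outline:
        prompt = f"Write the {heading} section of a paper on {topic}."
        sections.append(f"{heading}:\n{query_llm(prompt)}")
    return "\n\n".join(sections)

print(write_paper("GPT-3"))
```

Strip away the wrapper function and the model can only do step one: map each prompt to a completion. Everything that makes the output a paper rather than a pile of answers is done outside the model.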