Normally there would be a project update here, but I'm working on something a little bigger and more involved than usual. It's not done yet, and it doesn't lend itself to being displayed half-finished. So instead, please enjoy a little info about the current state of AI development in general, courtesy of the head of a failed startup: The End of Starsky Robotics
Read any article about AI being developed by a present-day academic or corporate research team, and there's a good chance that it's nothing like Acuitas. Today's most popular AIs are based on artificial neural networks, whose special ability is learning categories, procedures, etc. from the statistics of large piles of data. But as Stefan says, "It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool." At best, it only implements one of the skills a complete mind needs. Starsky Robotics tried to make up for AI's weaknesses by including a human teleoperator, so that their trucks were "unmanned," but not fully self-driving.
Academic and corporate teams have far more man-hours to work on AI than I do, but they're also pouring some of their efforts down a rabbit hole of diminishing returns. "Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems — supervised ML seems to follow an S-Curve. The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team."
The debate rages around what we should do next to innovate our way off that S-Curve. As a symbolic AI, Acuitas is something of a throwback to even older techniques that were abandoned before the current wave of interest in ANNs. But tackling AI at this conceptual level is the approach that comes most naturally to me, so I want to put my own spin on it and see how far I get.
A thing I've observed in a number of AI hopefuls is what I'll call "the search for the secret intelligence sauce." They want to find one simple technique, principle, or structure (or perhaps a small handful of these) that, when scaled up or repeated millions of times, will manifest the whole of what we call "intelligence." Put enough over-simplified neurons in a soup kettle and intelligent behavior will "emerge." Something from almost-nothing. Hey, poof! My own intuition is that this is not at all how it works. I suspect rather that intelligence is a complex system, and can only be invented by the heavy application of one's own intelligence. Any method that tries to offload most of the work to mathematical inevitabilities, or to the forces of chance, is going to be unsatisfactory. (If you want a glimpse of how devastatingly complicated brains are, here is another fascinating article: Why the Singularity is Nowhere Near)
This is of course my personal opinion, and we shall see!
So far Project Acuitas has not been adversely impacted by COVID-19. The lab assistant and I are getting along famously.
Until the next cycle,
Jenny Sue
Even when you know exactly what Intelligence is and exactly how it works and you have a very good idea of how to build it, it's still a gargantuan task. Bootstrapping it requires solving the NLP problems and building solid NLU, for example.
Then there is the matter of defining a suitable Ontology - an expert profession unto itself. Then comes the automatic filling of that ontology - or the system will merely be a rather trivial proof of concept. Reliably filling ontologies from text corpora is not SOTA yet because it requires better NLU. And we don't merely need "static" ontologies but also Episodic Memory (as opposed to Semantic Memory).
Then the Intelligence itself would be MUCH more complex than a VERY complex Chess program, and writing good, or even mediocre, Chess programs is only feasible for the best of developers.
Etcetera, etcetera...
Based on my own experience, "gargantuan" is exactly the right word.