I have a question for @bengoertzel. I watch a lot of interviews and podcasts from tech companies, ranging from small startups to mid-sized companies to large ones. It seems like every company is building its own AI and neural network. I've also heard repeatedly that the difficulty lies not in building the AI itself, since it takes only a few thousand lines of code to build an LLM, but rather in the training process and the need for large GPU clusters. Of course I've also watched all of your podcasts, and I understand that you guys are building a different system with neural, symbolic, and evolutionary learning.

My question is: what exactly sets apart the technologies being developed by SingularityNET and OpenCog Hyperon, given that so many companies seem to be building their own AI? It's intriguing because you've been working on this for years, even decades, while Stability's CEO mentioned starting their company in 2021 and going viral by 2022, building their entire codebase in just under a year. Can you explain the exact innovation SingularityNET is bringing to the table, how it will be more attractive for companies to work with than the tech currently on the market, and what the possibilities are?

Hmm, this is a question I have answered in great depth many, many times. I know it's long, but if you watch my interview with Lex Fridman it covers this ground fairly extensively, in the segment where I talk about OpenCog and its differences from deep NNs... If you have a technical background, look at the paper "The General Theory of General Intelligence" on arXiv. I have been working on a paper called "Generative AI vs. AGI" which answers this in a more explicitly "ChatGPT era" way... but it's a longish paper and I'm not going to try to repeat all the points here; the paper should come out before too long...

Short answer is, I don't think LLMs or other DNNs are going to be extensible to yield human-level AGI or beyond... I think their limitations in terms of originality/creativity, multistep reasoning chains, and formal/scientific/math reasoning are fundamental to their architecture... OpenCog Hyperon has a very different approach: there is a decentralized, self-modifying knowledge metagraph that is able to support and integrate multiple sorts of AI, including LLMs and other DNNs plus logical reasoning engines, evolutionary learning systems, etc. If Hyperon develops toward AGI as we are envisioning, then SNet/HypC/NuNet will play a role (perhaps among others) as decentralized infrastructure for Hyperon and its plugin ecosystem...

About Stability AI: they're great. I note they have raised $126M, which is quite a lot more than SNet plus all the projects I've ever done in my life put together, so they have not done it on the cheap, but they have not wasted the VC funds they raised either. I am glad they and others are there building OSS DNNs to match what Big Tech is doing... these are useful tools and can be part of integrated AGI architectures even if they are not on their own AGI-capable...
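To make the "shared knowledge metagraph plus multiple AI paradigms" idea from the answer above concrete, here is a minimal Python sketch. It is not Hyperon or MeTTa code, and the names (Atom, Metagraph, CognitiveProcess, toy_reasoner) are hypothetical illustrations rather than the project's API; the point is only the shape of the architecture: different kinds of AI processes read from and write back to one explicit, modifiable knowledge store instead of everything living inside a single network's weights.

```python
# Illustrative sketch only -- not Hyperon's actual API.
# Shows a shared knowledge metagraph that several AI processes
# (e.g. an LLM wrapper, a logical reasoner, an evolutionary learner)
# can all read from and write back to.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Atom:
    """A node or typed link in the knowledge metagraph."""
    kind: str                      # e.g. "Concept", "Implication"
    name: str
    targets: Tuple[str, ...] = ()  # names of atoms this link points at
    truth: float = 1.0             # toy stand-in for a truth/confidence value


class Metagraph:
    """Minimal shared knowledge store that all AI processes operate on."""
    def __init__(self) -> None:
        self.atoms: Dict[str, Atom] = {}

    def add(self, atom: Atom) -> None:
        self.atoms[atom.name] = atom

    def query(self, kind: str) -> List[Atom]:
        return [a for a in self.atoms.values() if a.kind == kind]


# Each "cognitive process" is just a function that reads the graph and
# writes new atoms back; an LLM wrapper, a logic engine, or an evolutionary
# learner would all fit this same shape and share the same graph.
CognitiveProcess = Callable[[Metagraph], None]


def toy_reasoner(g: Metagraph) -> None:
    """Crude deduction step: chain A->B and B->C into A->C."""
    impls = {(a.targets[0], a.targets[1]): a for a in g.query("Implication")}
    for (x, y) in list(impls):
        for (y2, z) in list(impls):
            if y == y2 and (x, z) not in impls:
                g.add(Atom("Implication", f"{x}->{z}", (x, z),
                           truth=impls[(x, y)].truth * impls[(y2, z)].truth))


if __name__ == "__main__":
    g = Metagraph()
    g.add(Atom("Implication", "rain->wet", ("rain", "wet"), truth=0.9))
    g.add(Atom("Implication", "wet->slippery", ("wet", "slippery"), truth=0.8))
    toy_reasoner(g)  # other processes (LLM, evolutionary learner) would run too
    print([a.name for a in g.query("Implication")])
    # ['rain->wet', 'wet->slippery', 'rain->slippery']
```

The design point, as stated in the answer, is the contrast with a monolithic LLM: knowledge here sits in an explicit, inspectable, self-modifiable graph that different learning and reasoning components can share, with neural models acting as one plugin among several rather than as the whole system.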
