months because the community was not happy, to attract and onboard AI services. You don't pay vendors to be part of a marketplace; it's usually the opposite. What budget has been allocated over 5 years to develop the marketplace that was the initial and main proposal? How many people have been dedicated to the task of finding and convincing service providers, and to promoting the marketplace everywhere? In my opinion, if we judge by the result ... not much. Either the project doesn't interest anyone, or the people in charge of recruiting suppliers are not up to the task... or SingularityNET is a Potemkin village
Not replying particularly to the folks insulting me in this thread, but I feel it may be useful to give a more nuanced/in-depth comment on ChatGPT and such...

Certainly transformer NNs (invented at Google in 2017-18, by a team including Luke Kaiser who used to work for me on AGI R&D at Novamente LLC btw...) were a major innovation in neural architecture, and have a lot of commercial and beneficial applications. We are using them a bunch in SNet-related projects, including the Grace and Sophia robots, some text summarization/analytics for the Mindplex project, and generating cool lyrics and vocals for the Desdemona robot, etc. etc.

Their limitations are also well known and have been summarized previously in articles by Gary Marcus, myself and others. The recent silliness with FB's Galactica AI illustrated these limitations quite well. Galactica generated scientific papers that looked sensible and realistic -- but often contained utter nonsense. ChatGPT is basically the same, but it's dealing w/ a domain (general chit-chat) where bloviating usually-sensible but occasionally-nonsensical stuff is more OK than in, say, scientific papers or medicine...

I don't see especial commercial or benevolent applications for ChatGPT in itself, though of course transformers for NLP have loads of great applications when wrapped carefully and appropriately in the right software frameworks designed w/ domain knowledge etc. etc.

Some folks may believe that AI systems that generate consistently meaningful stuff can be obtained by tweaking and improving transformers. I don't think so, because the way transformers work is by munging together surface-level expression patterns from a huge amount of data, not by trying to model the world or the meaning or ideas behind what they are reading or saying. I believe that making AI systems that generate consistently meaningful stuff will require a totally different approach... there may be many workable approaches here, and I think what we're working on w/ OpenCog Hyperon (designed for commercialization via TrueAGI and decentralized deployment via the SNet platform) is one of them...

OpenAI's successes in general have not been driven by big new science innovations made at their organization, but rather by throwing big $$ at compute resources aimed at training bigger neural models on bigger datasets than anyone else (based on basic ideas/algos originating outside OpenAI). This is not to say what they're doing is bad, just to note what it actually is... Anyone with that amount of $$, an OK level of modern-AI expertise and the desire could do the same as they're doing. Google isn't bothering because (though they HAVE invented a fair number of powerful original AI algos themselves) they are more focused on using their AI expertise and resources to make $$ ....

I note OpenAI was seeded with an "up to $1B" grant from Musk etc., then sold itself to M$ for a $1B commitment, and is now working on another round. SNet's overall cash-equivalent inflow has been more like $17-18M altogether (given fluctuating crypto exchange rates etc.), which is a different qualitative level and has not allowed the sort of big-server-farm-intensive model-building OpenAI and Google have done. To compete with these much wealthier organizations, our approach is to work on making fundamentally better algorithms and structures (e.g. SNet platform, OpenCog Hyperon, HyperCycle, AI-DSL...).
This is difficult stuff and is taking some time, but since we're not richer we have to prevail by being smarter... by figuring out workable plans based on a deeper understanding and better core math/algos, and systematically implementing this stuff without getting too distracted by the shallower/shinier stuff others are doing, or by the noise of markets or ignorant people etc. ...
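To make the "munging surface-level expression patterns" point above concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core transformer operation: every output vector is just a similarity-weighted blend of the input token vectors -- learned pattern mixing, with no explicit world model anywhere. This is a toy illustration only; the names and dimensions are made up, not taken from any SNet or OpenAI codebase.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights
    return weights @ V                               # weighted blend of value vectors

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                          # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                     # (5, 8): one blended vector per token
```

Real transformers stack many such layers with multi-head projections and feed-forward blocks, but the core operation remains this kind of data-driven blending of learned patterns.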
May those millions of eyeballs be on your work soon
With $1B we could greatly accelerate development of OpenCog Hyperon and HyperCycle, plus the AGI chip Rachel St. Clair and I have designed at Simuli ... (and with "merely" a few tens of millions of this we could dramatically accelerate progress in all the SNet spinoffs, which would then seed the usage of Hyperon/HyperCycle in their pertinent verticals...).... We might even find $20M or so spare to roll out speech-to-speech machine translation for under-resourced languages (like most African languages), thus bringing the rest of the world's population into the discussion on the Singularity as it progresses at its accelerated pace....

However, the way OpenAI got this $1B was by selling themselves to a big tech company, which is not a path that interests me because, while it could lead to faster technical progress on AGI, it would also lead to entanglements that would pull away from the decentralized/democratic aspect of our mission... I.e., a $1B infusion (or half that, I'd say) into actually workable AGI designs (rather than narrow-AI tricks) could very probably get us to a Singularity in 5 years ... but if this was a Singularity owned by Big Tech and intel agencies etc., this acceleration might not be a good thing. Better to take a few years longer and have things happen in a decentralized and more democratic way, with higher odds of broad benefit...
I love that approach of SNet 👊 Huge respect 🙏
That's a bold statement... to reach the Singularity in 5 years...
Certainly an approximate statement 😉 ...

Year 1 -- scaling up OpenCog Hyperon and prototyping the AGI chip (http://simuli.ai) ... building metaverse infrastructure for teaching Neoterics (https://docs.google.com/document/d/1jhy80oA4RnYu5tPFfkWjF9-wytmzgMPlNSNJy2NcPSw/edit)
Year 2 -- implement and test the OpenCog AGI algos on Hyperon, do a limited production run of the AGI chip. Fully scale up HyperCycle for decentralized Hyperon and NN operation.
Year 3 -- teach the Neoterics using Hyperon (by now running on HyperCycle); scalable production run of the AGI chip. Concurrently teach other Hyperon instances about biomedicine, finance, math, etc.
Year 4 -- integrate the Neoterics-centered Hyperon instances w/ other Hyperon instances taught other subjects ... this is all running on the AGI chip now...
Year 5 -- if you trust its ethics, let the Hyperon system start improving its own code and redesigning its hardware infra...

We are working toward this same roadmap now ... but with only SNet Foundation resources at the current order of magnitude it will likely take longer ...

Generating massive cash is not the only route to acceleration, though. There's also the route of pulling in a broader open source community and energizing them for massive contribution. And there are hybrid scenarios where SNet is very financially successful but not quite at the "$1B R&D budget" level, and acceleration happens via a combination of ramped-up SNet Foundation work plus a broad, energized OSS community.

SNet and HyperCycle each getting into the "top 100" cryptos list would certainly help, and I think this is quite feasible over the next couple of years. This wouldn't get us OpenAI-level R&D budgets, but it would get us partway there, and then the broad appeal of our decentralized/democratic-AGI vision can pull in the OSS community that will give us effectively greater dev firepower than the OpenAIs/DeepMinds of the world... which, combined with the fact that we have algorithms/architectures actually capable of AGI, will... well, you get it...
Interesting... I'm keeping Ray Kurzweil's prediction in mind, and he says it will happen in 2045. Elon Musk said in 2020 that AI could overtake humans by 2025, but that's not a Singularity... So 5 years would be amazingly fast compared to their predictions. I'd love to see/experience it tho. I hope it happens sooner rather than later.
Have you thought about decentralised and distributed training of neural networks?
For training e.g. transformer NNs, one really wants cutting-edge GPUs with lots of RAM... which means decentralized training is going to pay off well only among a decentralized network of pretty gonzo multi-GPU servers, for now...

To leverage weaker processors for NN training, one will need a major shift in neural architecture or neural learning algorithm. A shift to Alex Ororbia's predictive-coding-based neural weight learning could fit the bill here, for example; we are hoping to start some joint R&D next year focused on using his learning algos to make training of InfoGAN-type neural generative models (with semantically structured latent variables) work better than has been possible with backprop (useful for making NNs that are tractable to interface tightly w/ symbolic systems like OpenCog reasoning). A side-effect of this could be neural generative models for which training and inference are easier on decentralized networks of OK-but-not-super-gonzo processors....

As a parallel effort, though, once Simuli's AGI boards roll out a few years from now, machines w/ onboard GPUs, hypervector learners and OpenCog pattern-matching hardware may become more commonplace. Envision a home NuNet/HyperCycle/SNet/Hyperon box earning tokens for its owner by contributing AI processing to the global mind network on its AGI board...
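To make the predictive-coding idea concrete, here is a toy NumPy sketch of the general scheme -- my own illustration under simplifying assumptions (one data point, a 2-layer net), not Ororbia's actual algorithm or code. The key property it shows: the hidden state is settled by iterating on local prediction errors, and every weight update uses only activities and errors available at its own layer, so no global backprop chain has to cross machine boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # predicts hidden state from input
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # predicts output from hidden state
lr, T = 0.05, 30                                 # learning rate, inference settling steps

def train_step(x, y):
    global W1, W2
    z = np.tanh(W1 @ x)                  # initial guess for the hidden state
    for _ in range(T):                   # inference: settle the hidden state
        e1 = z - np.tanh(W1 @ x)         # error vs. what layer 1 predicts z to be
        e2 = y - W2 @ z                  # prediction error at the output
        z += lr * (W2.T @ e2 - e1)       # move z to reduce both local errors
    # weight updates: each uses only its own layer's activity and error (local!)
    h = np.tanh(W1 @ x)
    W1 += lr * np.outer((z - h) * (1 - h ** 2), x)
    W2 += lr * np.outer(y - W2 @ z, z)

x, y = rng.normal(size=n_in), np.array([1.0, -1.0])
for _ in range(300):
    train_step(x, y)
print(np.round(W2 @ np.tanh(W1 @ x), 2))  # forward-pass output drifts toward y
```

Because each update is purely local, in principle the layers could live on different machines and exchange only activities and error vectors -- the property that matters for NuNet/HyperCycle-style decentralized deployment.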