I could be wrong, but I think this falls under the assumption that the human brain is the ideal model for producing intelligence, and if the concept of AGI depends on it being human intelligence, yeah, that's not happening any time soon... But we know wheels and an engine work a lot better than legs to get us from point A to point B most of the time.

AGI means Artificial General Intelligence. As in, you don't have to teach or train it; it can train itself. Most people's explanations of how AGI will be achieved, given the current state of things, are usually reliant on the model already having AGI ("We don't need to understand how it will do it, it will just train itself!").
My skepticism is this: we do not understand even close to 1% of the exact mechanisms by which our neurons and signaling pathways work. We don't even know how many neuronal connections there are in a human brain; it's a rough estimate, because it's still beyond our ability to measure. The number of unknowns, guesses, and total bullshit in neuroscience is so much higher than most people realize.
And yet, without even understanding how the system already in place in our brains has achieved general intelligence, we're going to manage to build a system, which requires exact instructions, that replicates it? It's extremely unlikely, especially within our lifetimes.
ChatGPT is made by OpenAI (with Microsoft as its biggest backer), and they have kept the mechanisms behind their LLM completely closed source. The people saying "it has started to achieve AGI!!" are OpenAI themselves, with no actual proof. It's all marketing hype, because getting people to actually believe it is an incredibly profitable proposition for them.
The worst impact ChatGPT will have is giving people the ability to just cook up walls of text and spam them in every comment section and forum across the internet, tbh.
What's crazy about humans, and other life, is that our bodies, in a technology sense, are so far ahead of our minds. Replicating human decision-making is easy; replicating physical tasks is extremely difficult. Just think of how hard it is to engineer a robotic arm that can do a fraction of what a human arm can. At some point, this all comes full circle, where using technology to replace people is way more difficult than just hiring someone.
As for the hyping up and lack of transparency, it's a little fucked, but I think while a lot of it, if not most of it, is just business sense, a lot of it is also "well, we have this new thing and we have no idea what to do with it now".
I'm not saying that LLMs can't be useful in some capacity, but people are overestimating their abilities.
GPT 3.5 is garbage at coding, but GPT 4 seems quite good.

It's supposed to replace my job (coding) within 10 years, and I can barely get it to shit out a simple function. And it very often gives something that's not just wrong, it's actively insecure or harmful. I'm convinced it's just a really fast typist behind the screen.
Have you ever tried correcting it? I tried to get it to repeat Deadwood's period-accurate slurs by asking it to describe an episode. It told me the General was Al Swearengen's cook.

I will expand on this with a recent example.
I was interested in knowing the first reference to the internet in a TV show or movie. It told me the Simpsons episode "Marge vs. the Monorail" and gave me a fake quote. I said that was wrong. It apologized and gave me another Simpsons episode with a wrong quote. I kept saying it was wrong, and it kept giving me completely fabricated instances. I never got my answer. It'd be okay if it didn't get the very first reference, but it was literally inventing episodes and quotes.