

You don’t get notified if the channel owner deletes your comment.
A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
I agree. There just seems to be a fairly widespread pro-censorship sentiment among Lemmy users, usually driven by the desire to block speech that could be harmful to marginalized groups - but in practice, it often extends to broadly silencing all ideas they disagree with. The strawman here tends to be that anyone who wants more free speech just wants to shout slurs and spread (in their view) objectively harmful ideas.
That’s a bit different from using ChatGPT in what is effectively a one-on-one interview. This isn’t about writing a job application. It’s about someone asking you a question and, instead of answering it yourself, you have ChatGPT answer it for you.
This decision makes sense to me.
It only makes sense if you yourself are being paid by lobbyists.
I can’t read it in any other way than as him calling the previous poster a paid lobbyist.
You seriously think anti-weed (or whatever) lobbyists are being paid to comment on Lemmy? Because that’s what it sounds like you’re implying - discrediting the person for who they supposedly are, not for what they’re actually saying.
Doesn’t take much to get labeled far right these days.
That’s because it is.
The term artificial intelligence is broader than many people realize. It doesn’t mean human-level consciousness or sci-fi-style general intelligence - that’s a specific subset called AGI (Artificial General Intelligence). In reality, AI refers to any system designed to perform tasks that would typically require human intelligence. That includes everything from playing chess to recognizing patterns, translating languages, or generating text.
Large language models fall well within this definition. They’re narrow AIs - highly specialized, not general - but still part of the broader AI category. When people say “this isn’t real AI,” they’re often working from a fictional or futuristic idea of what AI should be, rather than how the term has actually been used in computer science for decades.
Different definitions for intelligence:
We have plenty of intelligent AI systems already. LLMs probably fit the definition. Something like Tesla FSD definitely does.
Our current AI models, sure - but a true superintelligent AGI would be a completely different case. As humans, we’re inherently incapable of imagining just how persuasive a system like that could be. When bribery doesn’t work, it’ll eventually turn to threats - and even the scenarios imagined by humans can be pretty terrifying. Whatever the AI would come up with would likely be far worse.
The “just pull the plug” argument, to me, sounds like a three-year-old thinking they can outsmart an adult - except in this case, the difference in intelligence would be orders of magnitude greater.
If they need to hire someone to come and do the repair, then good luck finding someone who can show up on such short notice.
Kind of a strange title, considering most of the article actually talks about some of the jurors being bullies - threatening others with violence - and others not wanting to work with them because of it.
Thanks.
Well, I don’t think OpenAI knows how to build AGI, so that’s false. Otherwise, Sam’s statement there is technically correct, but kind of misleading - he talks about AGI and then, in the next sentence, switches back to AI.
Sergey’s claim that they will achieve AGI before 2030 could turn out to be true, but again, he couldn’t possibly know that. I’m sure it’s their intention, but that’s different from reality.
Elon’s statement doesn’t even make sense. I’ve never heard anyone define AGI like that. A thirteen-year-old with an IQ of 85 is generally intelligent. Being smarter than the smartest human definitely qualifies as AGI, but that’s just a weird bar. General intelligence isn’t about how smart something is - it’s about whether it can apply its intelligence across multiple unrelated fields.
Is there a link where I could see them making these claims for myself? This is something I’ve only heard from AI critics, but never directly from the AI companies themselves. I wouldn’t be surprised if they did, but I’ve just never seen them say it outright.
It’s AI, not AGI. LLMs are good at generating language just like chess engines are good at chess. ChatGPT doesn’t have the capability to keep track of all the pieces on the board.
No, I haven’t seen that, but I’m sure you can provide an example.
The point of agents provocateurs - at least as I understand it - is to make a peaceful protest no longer peaceful, giving the authorities an excuse to shut it down. But even if the people committing vandalism really are provocateurs, they still can’t accomplish their goal alone. They need real protesters to go along with it. Plenty of people are willing to riot, but they don’t want to be the first. However, once someone else takes that step, they’ll happily join in.
If the general sentiment among the protesters was that this kind of behavior won’t be tolerated, these tactics wouldn’t work. A regular police officer can’t tell a protester from an agent provocateur - so if someone starts throwing bricks and immediately gets shunned and handed over to the cops, they’ll be taken away, and the protest stays peaceful.
Yet even on Lemmy, people can’t seem to make sense of these terms and are saying things like “LLMs are not AI.”
A large group of people trying to have a peaceful protest could easily stop a few individuals who start smashing things. But from the looks of it, the reaction often seems to be the opposite - people applaud it or even join in. Even here in the replies, there’s someone excusing it with “who cares about Google,” and another justifying the looting of an ampm - so apparently, in their mind, it’s not a problem. The fact that this exact same tactic is used by the aforementioned agents provocateurs should only further encourage self-policing within these protests.
Who cares?
The people outside your bubble.
I don’t understand why these protests so often end up with people destroying cars and shop windows - and then, in many cases, looting as well. Whatever legitimate cause you may have had loses credibility pretty quickly after that, and it becomes hard to sympathize with the protesters, let alone criticize the riot police for stepping in. A cynic might say that some are just looking for an excuse to cause havoc.
“Your claim is only valid if you first run this elaborate, long-term experiment that I came up with.”
The world isn’t binary. When someone says less moderation, they don’t mean no moderation. Framing it as all-or-nothing just misrepresents their view to make it easier for you to argue against. CSAM is illegal, so it’s always going to be against the rules - that’s not up to Google and is therefore a moot point.
As for other content you ideologically oppose, that’s your issue. As long as it’s not advocating violence or breaking the law, I don’t see why they’d be obligated to remove it. You’re free to think they should - but it’s their platform, not yours. If they want to allow that kind of content, they’re allowed to. If you don’t like it, don’t go there.