25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • 0 Posts
  • 31 Comments
Joined 8 months ago
Cake day: October 14th, 2024





  • And avoiding regulations isn’t the only reason big farms are more efficient, nor do small farms always comply with every regulation; some escape scrutiny simply by being too small to notice.

    Regulations are orthogonal to the question of efficiency. Regulations don’t exist to affect efficiency, they exist to enforce the common good at any scale.

    So I’m not defending mega farms. I’m certainly not suggesting anyone should avoid regulation. I’m just saying economy of scale applies to farms, and to suggest otherwise undermines any other argument one might make: if one argument in your quiver comes from a place of ignorance, what might that say about the rest?

    Regulate farms. Examine the subsidies. Target the abuses. By all means, there is nothing morally good about mega farms. It’s just that I’ve yet to see a good argument against them on the basis of efficiency.

    The argument against here should be, “any gains in efficiency are overshadowed by unethical treatment of animals, risk of population collapse due to rampant infections, and regulatory capture by farm lobbies.” Or some such.

    All IMO, of course. Fight your battles as you will. But I will call out fallacies of friends and foes alike because I don’t think they serve anyone.





  • I found out about this about a year ago, while I was laid off. It coincided with when the massive layoffs began, which seems like more than coincidence to me. Developer salaries aren’t low, and losing another 80% on top is a big hit.

    Also a lot of my coworkers are really nervous about immigration right now. This is a bad time to be an Indian tech worker in the US. My team of about 10 could wind up reduced to me and one other guy. We’d even lose our manager and every PM. And this team is responsible for critical software at a major company.


  • It’s a massive new disruptive technology, and people are scared of what changes it will bring. AI companies are putting out tons of propaganda, both claiming AI can do anything and fearmongering that AI is going to surpass and subjugate us, to back up that same narrative.

    Also, there is so much focus on democratizing content creation, which is at best a very mixed bag, and little attention is given to collaborative uses (which I think is where AI shines) because it’s so much harder to demonstrate, and it demands critical thinking skills and underlying knowledge.

    In short, everything AI is hyped as is a lie, and that’s all most people see. When you’re poking around with it, you’re most likely to just ask it to do something for you: write a paper, create a picture, whatever. The results won’t impress anyone actually good at those things, but will impress the fuck out of people who don’t know any better.

    This simultaneously reinforces two things to two different groups: AI is utter garbage and AI is smarter than half the people you know and is going to take all the jobs.



  • I think a lot of ground has been covered. It’s a useful technology that has been hyped to be way more than it is, and the really shitty part is a lot of companies are trying to throw away human workers for AI because they are that fucking stupid or that fucking greedy (or both).

    They will fail, for the most part, because AI is a tool your employees use, not a thing to foist onto your customers. Also, where does the next generation of senior developers come from if we replace junior developers with AI? Substitute in teachers, artists, copy editors, and others.

    Add to that people who are too fucking stupid to understand AI deciding it needs to be involved in intelligence, warfare, police work.

    I frequently disagree with the sky-is-falling crowd. AI use by individuals, particularly local AI (though it’s not as capable), is democratizing. I moved from Windows to Linux two years ago, and I couldn’t have done that if I hadn’t had AI to help me troubleshoot a bunch of issues I had. I use it all the time at work to leverage my decades of experience in areas where I’d otherwise have to relearn a bunch of things from scratch. I wrote a Python program in a couple of hours having never written a line before, because I knew what questions to ask.

    I’m very excited for a future with LLMs helping us out. Everyone is fixated on AI generation (image, voice, text), but that’s not where it shines. What it excels at is very quickly giving feedback. You have to be smart enough to know when it’s full of shit. That’s why vibe coding is a dead end. I mean, it’s cool that very simple things can be churned out by very inexperienced developers, but that has a ceiling. An experienced developer can also leverage it to do more, faster, at a higher level, but there is a ceiling there as well. Human input and knowledge never stop being essential.

    So welcome to Lemmy and discussion about AI. You have to be prepared for knee-jerk negativity, and for the ubiquitous correction whenever you anthropomorphize AI as a shorthand to keep your writing readable. There isn’t usually much overtly effusive praise here, as that gets shut down really quickly, but there is good discussion to be had among enthusiasts.

    I find most of the things folks hate about AI aren’t actually the things I do with it, so it’s easy to not take the comments personally. I agree that ChatGPT-written text is slop and I don’t like it as writing. I agree AI art is soulless. I agree distributing AI-generated nudes of someone is unethical (I could give a shit what anyone jerks off to in private). I agree that in certain niches, AI is taking jobs, even if I think humans ultimately do the jobs better.

    I do disagree that AI is inherently theft, and I just don’t engage with comments to that effect. It’s unsettled law at this point and I find it highly transformative, but that’s not a question anyone can answer in a legal sense; it’s all just strongly worded opinion.

    So discussions regarding AI are fraught, but there is plenty of good discourse.

    Enjoy Lemmy!


  • One of the things I miss about web rings and recommended links is that it’s people who are passionate about a thing saying, “here are other folks worth reading on this.” Google is a piss-poor substitute for the recommendations of people you like to read.

    The only problem with the slow web is that people write about what they’re working on; they aren’t trying to exhaustively create “content.” By which I mean, they aren’t going to have every answer to every question. You read what’s there, you don’t go searching for what you want to read.



  • Most people don’t care about decentralization

    I think that’s largely not the case for people who are currently on Lemmy/Mastodon, but I think you’re right that it prevents larger adoption. I’m okay with that, though. I don’t need to talk with everyone. There’s room for more growth, probably especially for more niche communities, but at least for me Lemmy has hit critical mass.

    Everything else I either like the things you dislike or disagree that they are problems.





  • MagicShel@lemmy.zip to Technology@lemmy.world · Ai Code Commits · 11 days ago

    An LLM providing “an opinion” is not a thing

    Agreed, but can we just use the common parlance? Explaining completions every time is tedious, and nearly everyone talking about it at this level already knows. It doesn’t think, it doesn’t know anything, but it’s a lot easier to use those words to mean something that seems analogous. But yeah, I’ve been on your side of this conversation before, so let’s just read all that as agreed.

    this would not have to reach either a human or an AI agent or anything before getting fixed with little resources

    There are tools that do some of this automatically. I picked really low hanging fruit that I still see every single day in multiple environments. LLMs attempt (wrong word here, I know) more, but they need review and acceptance by a human expert.

    Perfectly decent-looking “minor fixes” that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context, are the issue. Even if rare (empirically, I’d say they are not that rare for now), those are so much harder to spot without full human analysis, and they are a real threat.
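    As a minimal sketch of why those slip past review (the function and numbers here are hypothetical, not from any real codebase): two parameters of the same type can be swapped without any error, so the bad call runs cleanly and just returns the wrong answer.

    ```python
    # Hypothetical example: both parameters are ints, so swapping them
    # raises no error and can survive a casual code review.
    def price_per_unit(total_cents: int, quantity: int) -> int:
        return total_cents // quantity

    correct = price_per_unit(500, 5)     # 100 cents per unit
    # A plausible-looking "minor fix" that swaps the arguments still runs,
    # it just silently computes garbage:
    subtle_bug = price_per_unit(5, 500)  # 0
    ```

    A type checker can’t catch that; only a reviewer who knows what the call is supposed to mean can.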

    I get that folks are trying to fully automate this. That’s fucking stupid. I don’t let seasoned developers commit code to my repos without review, so why would I let AI? Incidentally, seasoned developers can also suggest fixes with subtle errors. And sometimes they escape into the code base, or sometimes perfectly good code that worked fine on prem goes to shit in the cloud. I just had to argue my team into fixing something that, due to lazy loading, executed over 10k SQL statements on a single page load in some cases. That shit worked “great” on prem but was taking up to 90 seconds in the cloud. All written by humans.
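    That lazy-loading failure is the classic N+1 query pattern. A minimal sketch (the query strings and counts are illustrative, not the actual code): one query fetches the parent rows, then each child lookup fires its own round trip, which is cheap on prem and ruinous over cloud latency.

    ```python
    # Illustrative N+1 sketch: count round trips instead of hitting a real DB.
    query_count = 0

    def run_query(sql: str) -> list:
        global query_count
        query_count += 1
        return []  # stand-in for a real database call

    def render_page_lazy(order_ids) -> None:
        run_query("SELECT * FROM orders WHERE id IN (...)")          # 1 query
        for oid in order_ids:                                        # N queries
            run_query(f"SELECT * FROM lines WHERE order_id = {oid}")

    def render_page_batched(order_ids) -> None:
        # One joined (or IN-batched) query replaces the per-row lookups.
        run_query("SELECT * FROM orders JOIN lines ON ...")          # 1 query

    render_page_lazy(range(100))
    lazy_total = query_count       # 101 round trips

    query_count = 0
    render_page_batched(range(100))
    batched_total = query_count    # 1 round trip
    ```

    Add even 10ms of network latency per round trip and the lazy version costs seconds per page while the batched one stays flat.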

    The goal should not be to emulate human mistakes, but to make something better.

    I’m sure that is someone’s goal, but LLMs aren’t going to do that. They are a different tool that helps but does not in any way replace human experts. And I’m caught in the middle of every conversation because I don’t hate them enough for one side, and I’m not hyped enough about them for the other. But I’ve been working with them for several years now, and I’ve watched them grow since GPT-2, and I understand them pretty well. Well enough not to trust them to the degree some idiots do, but I still find them really handy.