• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • I’m a little younger; I grew up playing the NES. I had so much fun, and some of my best memories are from playing those games with friends and stuff. But I find it really hard to revisit most of those games on their own merits.

    There is definitely a thing about playing games together with another person that can be magical. And that isn’t gone. You can still do that today with modern games. So in that regard, I don’t think there is anything particularly special about 80s games. Heck, it wasn’t until the N64 that it was common for more than 2 people to be able to play together. A bunch of guys hanging out and all playing a game together was great.

    I think losing that is just a factor of growing up. You move on from your friends, maybe you don’t make any new ones, you start mainly playing against faceless strangers online… It’s not a problem with the games, it’s a problem with the players.

  • This is an argument of semantics more than anything. Like asking if Linux has a GUI. Are they talking about the kernel or a distro? Are some people going to be really pedantic about it? Definitely.

    An LLM is a fixed blob of binary data that takes an input, does some statistical transformations, and produces an output. ChatGPT is an entire service or ecosystem built around LLMs. Can it search the web? Well, sure, they’ve built a solution around the model to allow it to do that. However, if I ran an LLM locally on my own PC, it wouldn’t necessarily have that kind of tooling built around it.
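    The model-versus-service distinction can be sketched in a few lines of Python. This is purely illustrative: `bare_model`, `web_search`, and `wrapped_service` are hypothetical stand-ins, not real APIs, and the "model" here is just a fixed function so the example is self-contained.

    ```python
    # Hypothetical sketch of the distinction above: the model itself is a fixed
    # mapping from input text to output text, while a service like ChatGPT wraps
    # the model in extra tooling (here, a fake web-search step).

    def bare_model(prompt: str) -> str:
        # Stands in for a fixed LLM: same input, same output, no network access.
        if "Search result:" in prompt:
            return "Based on the provided result, it is 12°C and raining."
        if "weather" in prompt:
            return "I cannot access realtime data; my knowledge is fixed."
        return "A plausible-sounding completion."

    def web_search(query: str) -> str:
        # Stands in for tooling the service builds around the model.
        return "Search result: 12°C and raining."

    def wrapped_service(prompt: str) -> str:
        # The service layer decides to call a tool, then feeds the result
        # back into the same fixed model as extra context.
        if "weather" in prompt:
            context = web_search(prompt)
            return bare_model(f"{context}\n{prompt}")
        return bare_model(prompt)

    print(bare_model("What's the weather?"))      # the model alone can't know
    print(wrapped_service("What's the weather?")) # the service can
    ```

    The point is that nothing about the model changed between the two calls; only the scaffolding around it did.
    
    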

    Now, can we expect every person to be fully up to date on the product offerings at ChatGPT? Of course not. It’s not unreasonable for someone to say that an LLM doesn’t get its data from the Internet in real time, because in general, they are a fixed data blob. The real crux of the matter is people’s understanding of what LLMs are, and whether their answers can be trusted. We continue to see daily examples of people doing really stupid stuff because they accepted an answer from ChatGPT or a similar service as fact. Maybe it does have a tiny disclaimer warning against that. But the actual marketing of these things always makes them seem far more capable than they really are, and the LLM itself often speaks in a confident manner, which can fool a lot of people if they don’t have a deep understanding of the technology and how it works.