That illustrates a point, though: Pit Bulls tend to get bought by violent owners because of their infamy, which reinforces that infamy and gets more people to recognize the breed, which yields more taught violence, and so on…
brucethemoose@lemmy.world to Technology@lemmy.world • South Korea makes AI investment a top policy priority to support flagging growth • English • 1 • 7 hours ago
LG’s recent Exaone release is a pretty awesome local model for the size. A little deep-fried and repetitive, but great for code and stuff, which is especially notable since most decent models tend to only be good at Mandarin Chinese and English.
…Except they slapped an insane license on it. Basically you sign your life away just by looking at it: https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-32B/blob/main/LICENSE
That’s not the norm, either: many 32B-class models (the size that fits a 16GB-24GB VRAM GPU) are Apache licensed. Hence, the ML tinkerer community has pretty much forgotten about it.
brucethemoose@lemmy.world to Selfhosted@lemmy.world • Lowering power consumption on Opteron • English • 3 • 22 hours ago
I dunno about Linux, but on Windows I used to use something called K10stat to manually undervolt cores when the BIOS offered no such control. The difference was night and day, since they idled ridiculously fast and AMD left a ton of voltage headroom back then.
I bet there’s some Linux software to do it. Look up whether anyone has used voltage-control software for desktop Phenom IIs and the like.
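If nothing turnkey turns up, the registers K10stat pokes on Windows are plain MSRs, so they can at least be inspected from Linux once the msr kernel module is loaded. A minimal sketch, assuming an AMD family 10h chip (Phenom II / K10 Opteron) and root access; the field layout is from AMD’s family 10h BKDG and should be double-checked against your revision before writing anything back:

```python
# Read the K10 P-state definition MSRs and decode FID/DID/VID.
# Assumes: family 10h CPU, `modprobe msr`, and root (to open /dev/cpu/*/msr).
# Field layout (CpuFid bits 5:0, CpuDid bits 8:6, CpuVid bits 15:9, PstateEn
# bit 63) is from the family 10h BKDG; verify it for your revision.
import struct

PSTATE_MSRS = range(0xC0010064, 0xC001006C)  # P-state 0..7 definitions

def read_msr(msr: int, cpu: int = 0) -> int:
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(msr)
        return struct.unpack("<Q", f.read(8))[0]

for i, addr in enumerate(PSTATE_MSRS):
    val = read_msr(addr)
    if not (val >> 63) & 1:          # PstateEn: skip unused P-states
        continue
    fid = val & 0x3F
    did = (val >> 6) & 0x7
    vid = (val >> 9) & 0x7F
    mhz = 100 * (fid + 0x10) / (1 << did)
    print(f"P{i}: fid={fid:#x} did={did} vid={vid} (~{mhz:.0f} MHz)")

# Undervolting means writing the same MSR back with a different CpuVid.
# On these serial-VID parts a larger VID number is a lower voltage, so step
# slowly and stress-test, which is exactly what K10stat automated on Windows.
```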
brucethemoose@lemmy.world to Games@lemmy.world • Steam: Updates to User Review Scores Based on Language • English • 11 • 1 day ago
Full disclosure, I will sometimes cry about cryptocurrency stuff or slop. But I mined a bitcoin many moons ago, too! And I’m quantizing an LLM as I type!
brucethemoose@lemmy.world to Games@lemmy.world • Steam: Updates to User Review Scores Based on Language • English • 12 • 1 day ago
Yeah.
That’s the vibe I get from Lemmygrad too: like they assume the rest of the world is constantly pondering how much it hates China, as a dominating thought.
It’s bizarre, and I’m by no means generalizing about Chinese people, either. Some researchers I’ve interacted with (pretty much my only other exposure to China) are not like that at all.
brucethemoose@lemmy.world to Technology@lemmy.world • Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia' • English • 61 • 2 days ago
Yeah, you’re right. My thoughts were kinda uncollected.
Though I will argue some of the negatives (like inference power usage) are massively overstated, and even where they aren’t, they’re more the result of corporate enshittification than of the AI bubble itself.
Even the large scale training is apparently largely useless: https://old.reddit.com/r/LocalLLaMA/comments/1mw2lme/frontier_ai_labs_publicized_100kh100_training/
brucethemoose@lemmy.world to Technology@lemmy.world • Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia' • English • 232 • 2 days ago
The hate in this bubble is pretty front-loaded, though.
Dotcom was, well, a useful thing. Valuations were nuts, I guess, but most of the hate came in the enshittified aftermath.
Crypto is a series of bubbles trying to prop up flavored pyramid schemes built around a neat niche concept, but people largely figured that out after they popped. And it’s not as attention-grabbing as AI.
Machine learning is a long-running, useful field, but ever since ChatGPT caught investors’ eyes, the cart has felt so far ahead of the horse. The hate started, and got polarized, waaay before the bubble popped.
…In other words, AI hate almost feels more political than bubble-fueled, if that makes any sense. It is a bubble, but the extreme hate would still be there even if it weren’t.
brucethemoose@lemmy.world to Technology@lemmy.world • Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia' • English • 14 • 2 days ago
Waves hands “You didn’t see anything.”
brucethemoose@lemmy.world to Technology@lemmy.world • Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia' • English • 142 • 2 days ago
Neither did Wales. Hence, the next part of the article:
For example, the response suggested the article cite a source that isn’t included in the draft article, and rely on Harvard Business School press releases for other citations, despite Wikipedia policies explicitly defining press releases as non-independent sources that cannot help prove notability, a basic requirement for Wikipedia articles.
Editors also found that the ChatGPT-generated response Wales shared “has no idea what the difference between” some of these basic Wikipedia policies, like notability (WP:N), verifiability (WP:V), and properly representing minority and more widely held views on subjects in an article (WP:WEIGHT).
“Something to take into consideration is how newcomers will interpret those answers. If they believe the LLM advice accurately reflects our policies, and it is wrong/inaccurate even 5% of the time, they will learn a skewed version of our policies and might reproduce the unhelpful advice on other pages,” one editor said.
It doesn’t mean the original process isn’t problematic, or that it can’t be helpfully augmented with some kind of LLM-generated supplement. But this is like a poster child for a troublesome AI implementation: a general-purpose LLM needs context it isn’t given (but the reader assumes it has), hallucinations have knock-on effects, and even the founder of Wikipedia seemingly missed the errors.
Don’t mistake me for being blanket anti-AI; clearly it’s a tool Wikipedia can use. But the scope has to be narrow, and the problem specific.
brucethemoose@lemmy.world to Technology@lemmy.world • Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia' • English • 1245 • 2 days ago
Wales’s quote isn’t nearly as bad as the headline makes it out to be:
Wales explains that the article was originally rejected several years ago, then someone tried to improve it, resubmitted it, and got the same exact template rejection again.
“It’s a form letter response that might as well be ‘Computer says no’ (that article’s worth a read if you don’t know the expression),” Wales said. “It wasn’t a computer who says no, but a human using AFCH, a helper script […] In order to try to help, I personally felt at a loss. I am not sure what the rejection referred to specifically. So I fed the page to ChatGPT to ask for advice. And I got what seems to me to be pretty good. And so I’m wondering if we might start to think about how a tool like AFCH might be improved so that instead of a generic template, a new editor gets actual advice. It would be better, obviously, if we had lovingly crafted human responses to every situation like this, but we all know that the volunteers who are dealing with a high volume of various situations can’t reasonably have time to do it. The templates are helpful - an AI-written note could be even more helpful.”
That being said, it still reeks of “CEO Speak.” And trying to find a place to shove AI in.
More NLP could absolutely be useful to Wikipedia, especially for flagging spam and malicious edits for human editors to review. This is an excellent task for dirt-cheap, small, open models, where the error rate isn’t super important; cost, volume, and reducing stress on precious human editors are. It’s an existential issue that needs work.
…Using an expensive, proprietary API to give error-prone yet “pretty good”-sounding suggestions to new editors is not.
Wasting dev time trying to make it work is not.
This is the problem. Not natural language processing itself, but the seemingly contagious compulsion among executives to find some place to shove it when the technical extent of their knowledge is occasionally typing something into ChatGPT.
It’s okay for them to not really understand it.
It’s not okay to push it differently than other technology because “AI” is somehow super special and trendy.
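To make the “narrow scope, specific problem” idea concrete, the spam/vandalism flagging above could be little more than a cheap classifier feeding a human review queue. A minimal sketch, assuming a Hugging Face zero-shot pipeline as a stand-in; the model name, labels, and threshold are placeholders, not anything Wikipedia actually runs:

```python
# Flag edit diffs for human review with a small, cheap, open model.
# Some error rate is fine because nothing is auto-reverted; a human decides.
from transformers import pipeline

# Placeholder model: any small open classifier (or a purpose-built fine-tune) would do.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["vandalism or spam", "plausibly good-faith edit"]

def flag_for_review(diff_text: str, threshold: float = 0.8) -> bool:
    result = classifier(diff_text[:2000], candidate_labels=LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label == LABELS[0] and top_score >= threshold

if __name__ == "__main__":
    print(flag_for_review("BUY CHEAP WATCHES at totally-legit.example !!!"))
```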
brucethemoose@lemmy.world to Technology@lemmy.world • Chrome VPN Extension With 100k Installs Screenshots All Sites Users Visit • English • 2 • 3 days ago
Many VPN companies post audits and build up reputations. Not that I’d recommend it specifically (I only use it because of a lifetime subscription I bought in a sale), but FastestVPN advertises the former.
…I guess it depends what you’re doing, too. If you’re, like, a government whistleblower, you might want to look into Mullvad layered with something else instead of a more traditional commercial provider.
brucethemoose@lemmy.world to Technology@lemmy.world • How the rise of Craigslist helped fuel America’s political polarization • English • 1 • 3 days ago
The reduction in coverage was most pronounced before primary elections.
The reduction in staff covering politics made it harder for voters to differentiate between moderates and extremists in partisan primaries, and allowed extreme candidates to do better than they did before.
This makes sense, and it explains a lot, actually.
And to be clear, it’s not that Craigslist is the sole culprit here; it’s that it makes for such a controlled A/B test that the effects are reliably measurable.
In the original paper, they also observed reduced turnout for House/Senate elections (which the article didn’t emphasize as much, but it’s definitely there): https://academic.oup.com/restud/article/92/3/1738/7665573?login=false#517516514
brucethemoose@lemmy.world to Technology@lemmy.world • THE NVIDIA AI GPU BLACK MARKET | Investigating Smuggling, Corruption, & Governments • English • 1 • 5 days ago
I don’t think LTT ever did, or claimed to do, investigative journalism (or really journalism at all)?
But that’s how it’s presented, and that’s how many viewers interpret it.
brucethemoose@lemmy.world to Lemmy Shitpost@lemmy.world • "I have an story/fanfiction idea ruminating my mind for a long time. I wish I could write it " • 0 • 9 months ago
I suspect Lemmy is going to dislike this, but local LLMs are great writing helpers.
When you’re stuck on a sentence or staring at a blank paragraph, get them to continue it, then rewrite that once it jogs your mind. If you’re drafting ideas for characters or chapters, you can sanity-check them or sometimes get new ideas. They can reword and clean up your writing and improve it beyond what self-experimentation can do… just keep in mind that it’s like an idiot intern that tends to repeat itself and hallucinate.
And this is very different from an API model like ChatGPT because:
- It won’t refuse you.
- It’s formatted as a notebook you can continue at any arbitrary point, rather than a user/chatbot type format (see the sketch below).
- The writing isn’t so dry, and it isn’t filled with AI slop, especially with cutting-edge sampling.
- All its knowledge is “internal,” with no reaching out to the web or hidden prompting under your nose.

Along with all the usual reasons: it’s free, self-hosted, more ethically trained, fast and efficient with long context thanks to caching, and has nothing to do with Sam Altman’s nightmarish visions.
I’d recommend: https://huggingface.co/nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
And once the story gets long: https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
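For the “notebook” point above, here’s a minimal sketch of what that continuation loop looks like, assuming llama-cpp-python and a local GGUF quant of one of the models above (the exact filename is hypothetical); any runner with a raw completion mode works the same way:

```python
# Raw text completion (no chat template, no system prompt): the model just
# continues whatever draft you hand it, from any point you choose.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Gutenberg-Doppel-32B-Q4_K_M.gguf",  # hypothetical local quant
    n_ctx=8192,
)

draft = (
    "Chapter 3\n"
    "The rain had not let up since morning, and "
)

out = llm(draft, max_tokens=150, temperature=0.8, repeat_penalty=1.1)
print(draft + out["choices"][0]["text"])
# Keep whatever jogs your mind, rewrite the rest, append, and run it again.
```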
I’d recommend LanguageTool (with a local server and the browser extension) for locally hosted spelling/grammar/style as well.
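If you want to script checks against that same local LanguageTool server outside the browser, its HTTP API is simple. A minimal sketch, assuming the standalone server is running on its default port 8081:

```python
# Query a locally running LanguageTool server (the same one the browser
# extension can point at) for spelling/grammar/style matches.
import requests

resp = requests.post(
    "http://localhost:8081/v2/check",
    data={"language": "en-US", "text": "This sentense definately needs a check."},
)
resp.raise_for_status()
for match in resp.json()["matches"]:
    suggestions = [r["value"] for r in match["replacements"]]
    print(match["message"], "->", suggestions[:3])
```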
I have ADD, which may be why I find this setup to be so therapeutic.
Rub his belly! And chin. Immediately.