Comment Re:Reality if Warmer than you Think (Score 0) 35

You should Google how often the world mocks the UK for its "heat warnings" or whatever you call them over there, rather than listing all-time heat records. Great job pointing out that the US and UK use different measurement systems; literally nobody knows that, so it was very useful to learn today, thank you for that deep insight. You will surely be the person to turn around the UK's declining fortunes. Stiff upper lip, I'm rooting for you.

Comment Re:Yes, but (Score 1) 30

You should check out the latest state-of-the-art stuff coming out of Google and OpenAI; they're releasing models that run just fine on an 8 GB consumer GPU and perform as well as summer-2024 models, with offload to system RAM through new architectural techniques. A "20B" model will run acceptably on an 8 GB card these days, provided you have 16 GB of system memory. You don't even need a good or modern GPU, just reasonably fast VRAM. Google has released some Gemma and Gemini models in the 270M to 4B range (that 'M' isn't a typo) that are functional for running lights, thermostats, kitchen timers, etc., and that can and do run on Raspberry Pi hardware now.
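The "20B on an 8 GB card" claim is easy to sanity-check with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bits per weight. A minimal sketch (illustrative only; real usage adds KV cache, activations, and runtime overhead, and the function name here is my own, not any library's):

```python
# Back-of-the-envelope weight-memory estimate for a quantized model.
# Ignores KV cache, activations, and runtime overhead, so treat the
# numbers as a floor, not a measurement.

def model_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB (10^9 bytes) at a given quantization."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 20B model at 4-bit quantization: about 10 GB of weights, which is why
# part of it spills from an 8 GB card into system RAM.
print(model_memory_gb(20, 4))    # ~10 GB
# A 270M model at 8-bit: about 0.27 GB, small enough for a Raspberry Pi.
print(model_memory_gb(0.27, 8))
```

This is why the 16 GB of system memory matters: the few gigabytes that don't fit in VRAM have to live somewhere.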

Comment Re:Yes, but (Score 1) 30

The devices with screens (that can show you ads) tend to redirect you to an external website, whereas the hockey-puck screenless devices tend to answer the question directly. We don't use the screen devices for much beyond turning lights on and off anymore, since their answers involve looking at the screen. I'm looking forward to switching to a separate privately hosted LLM solution as soon as good hardware becomes available that's not a Raspberry Pi in a 3D-printed case.

Comment Re: This is so funny (Score 1) 373

It is pretty hard not to respond to the pure BS that anti-EV types spout. I know it rubs you the wrong way, but the alternative is to let people who don't know what they're talking about dominate public perceptions.

I wouldn't claim EVs are for everyone, but for many of us they are extremely convenient and economical to run. The corner cases where ICE is clearly more convenient are not a concern for everyone, and not a concern for a multi-car household considering making one of their cars an EV. We have an EV and a plug-in hybrid that runs as an EV probably 80% of the time. We hit the gas station with the plug-in about once every six weeks.

Comment Why do people have jobs in the first place? (Score 1) 34

I heard an economist pose this question once. Why do companies have employees at all? Why not use contractors? Then you could hire just as much labor as you need, when you need it, then not pay for labor when you didn't need it.

His reason was the costs involved with finding contractors then negotiating agreements with them. I think there are other reasons, but for sure that's part of it.

But I think technology is pushing us into an intermediate position between the semi-permanent, often lifelong employment of a generation ago and a world of contracting for everything. I think this is evidenced by a pattern I have seen where companies that are currently successful lay people off. It's not just the tech world; this is happening in the service industry too.

When technology allows you to monitor the financial performance and cost of every department in an enterprise down to a fare-thee-well, it's easy to identify people you don't need so much in the upcoming quarters and let them go. Then with Internet hiring and automated application screening it's easy to hire those positions back in a year.

Now there are a lot of holes in this rosy (for management) scenario. Automated application screening is dog shit, for example. But you can do it, and you will find people; probably not the *best* people, but then you'll never know, in fact *nobody* will ever know. People will never get to know their jobs well, but again you won't ever know what you're missing. Most of all, you will never have anything resembling loyalty from the people you hire; young people these days look at every job as transient. But you can't *measure* loyalty, or in most cases job competence, with any precision. You can track costs down to the penny.

Comment Re:Not unexpected (Score 2) 37

In this case this wasn't about AI underperforming what was promised, but AI performance being exaggerated to cover the company's tracks as it offshored jobs to India. The intent was to use AI as an excuse to let Australian workers go, then to quietly replace them with Indian ones.

I don't think AI promises are "empty", but there is a lot of irrational enthusiasm out there getting ahead of the technology. I think for sure there are plenty of technical failures arising from technological hubris and naiveté. And I expect even more instances where the technology is blamed for company failures or unpopular policies -- that practice goes back to the very early era of "computerizing" things like invoicing, so I don't see why this round of technological change would be any different.

But for sure, AI is coming for a lot of jobs. Past forms of automation haven't ended employment; they were just ways of increasing worker productivity. Companies still hired workers until the next marginal dollar spent wouldn't bring in a marginal dollar of revenue. But this time may be different. AI is replacing human thinking. It may be mediocre at thinking, but so are most humans. It may be an opportunity for companies to leverage a small number of humans with advanced cognitive skills, but I think for many companies the siren call of mediocre but really cheap will be too hard to resist.

Comment Re: The AI is not the problem (Score 1) 93

I think this gets to the old debate about language learning vs. acquisition. If you learn the gender of the noun "Mädchen", that will prevent you from making errors, which is a good thing. But the language acquisition approach doesn't worry about you making mistakes. If you're exposed enough to the word being used correctly, "die Mädchen" eventually just sounds wrong. You will have acquired the gender of the noun without technically learning it. You don't even have to understand that nouns have gender.

It sounds great, but there is no way you're going to acquire enough German this way, playing a game a few minutes a day, to have a functional level of German in a short time, say for an upcoming trip. The company isn't as up front about this as they should be, but common sense should tell you that.

Duolingo is a way of putting time you're spending on useless phone activities like playing Candy Crush towards something useful. After, say, two years spending fifteen minutes a day on Duolingo French, you'll be able to read things like the train schedules in France, get the gist of simple newspaper articles, and understand people who speak slowly and distinctly about things like directions to tourist sites. In other words, a useful amount of French. You'll have a leg up (I suppose) on more intensive ways of learning French.

But the idea that you will reach B2 proficiency with Duolingo seems far-fetched to me, given that I'm working on B1 and doubt I could pass the A2 exam. You'll have covered the material by the end of the course, sure, but at this point it's pretty clear to me that actually mastering it requires actually communicating with fluent French speakers.

Which is fine. Nobody is stopping you from using more effective ways of learning. Duolingo's job, and its economic incentive, is to keep you engaged. This is why course content quality is important, and building courses out of AI slop is counterproductive. As a game, Duolingo isn't so much fun that you'd play it even if the content were bad.

Comment Re: The AI is not the problem (Score 2) 93

There's a huge difference in quality between Duolingo language courses. The flagship Spanish and French courses have by far the most material, human-made material at that, including native speaker voice actors. In about two years of studying both languages I have gotten to the point where I can follow along with TV shows and read simple materials, which I consider reasonable payback for the effort. I don't feel I got much out of the German course, which I took at the same time; however, German is a distant third in course quantity and quality. The AI features in the flagship courses are hit or miss, but generally good enough to be useful.

Reportedly the company is using AI-generated content to pad out or even completely create less popular courses, but I literally haven't heard from anyone who has good things to say about that stuff.

Judicious use of AI makes sense in an otherwise human-designed course (e.g., trying to respond to ad hoc user input conversationally). But betting the company on mostly AI-generated courses does seem like a recipe for getting crushed by players like Google or OpenAI, with their vast and advanced models.

Comment Re:Crash and burn, or rise and conquer (Score 1) 56

A lot of these smaller 3-12 person companies will develop some proprietary tech on top of (probably model-agnostic) state-of-the-art models and, rather than sell the product, just maintain it for their existing customers as a professional services company. Something that solves a difficult problem, but has to be uniquely wired into each client's systems differently, with features added or tweaked per customer.

Comment Re:I'll leave for another product (Score 1) 49

> It also shows me that locally running LLM are not nearly as bad as some portray these to be.
 
Well, also, things have improved a lot since last summer, and there's been a lot of work lately on bringing the architectural strategies of flagship models into small and micro models. A 270M model today is about as good as a 4B model was two years ago. Back then a 4B model couldn't write a haiku or sonnet; now a 270M will at least make an attempt on par with a 6th grader, even if it isn't exactly perfect. 270M seems like the lower limit for getting coherent responses and basic analysis, but that's enough for tasks like checking the time, weather, and date, setting timers and alarms, and knowing when the model is out of its depth and should hand off to a larger one. A 270M model is probably just small enough to run on a Raspberry Pi with half a gig of RAM.

Comment Re:I'll leave for another product (Score 1) 49

You can self-host a pretty competent LLM on an 8 GB GPU and a really competent one on 16 GB these days. A 30B model might only hold some of its layers in GPU memory but still be fast enough for daily use. If you have 64 GB of system memory and a 16-32 GB GPU, in a year or two you'll be able to run summer-2025 state-of-the-art models locally. You can already run winter-2024 state-of-the-art models on consumer hardware. The only thing a self-hosted setup doesn't have access to yet is search tools.
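The partial-offload trade-off above (some layers on the GPU, the rest in system RAM, as with llama.cpp's --n-gpu-layers option) comes down to simple division. A rough sketch, where the layer count, quantized size, and reserved overhead are illustrative assumptions rather than measurements of any particular model:

```python
# How many whole transformer layers fit in a VRAM budget, assuming layers
# are roughly equal in size. Illustrative arithmetic only: real runtimes
# also keep KV cache and scratch buffers on the GPU.

def layers_on_gpu(total_layers: int, weights_gb: float, vram_gb: float,
                  reserved_gb: float = 1.0) -> int:
    """Count of whole layers that fit after reserving VRAM for overhead."""
    per_layer_gb = weights_gb / total_layers
    budget = max(vram_gb - reserved_gb, 0.0)
    return min(total_layers, int(budget / per_layer_gb))

# A hypothetical 30B model quantized to ~17 GB across 48 layers:
print(layers_on_gpu(48, 17.0, 8.0))   # on an 8 GB card
print(layers_on_gpu(48, 17.0, 24.0))  # a 24 GB card holds everything
```

Every layer left on the GPU avoids a round trip over the PCIe bus, which is why even a partial offload speeds things up noticeably.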
