
Comment Re:Humanities professor here (Score 1) 61

AI is not a shoulder to stand on. It's a make-believe sentence generator, one that has encoded a large collection of sentences discussing homework (*). Standing on that is actually counterproductive, because it wastes time and builds weakness into the foundations of science.

(*) On the Internet. Badly.

Comment Re:Bubble #4 (Score 3, Insightful) 76

It is likely that the next algorithmic improvement will happen in 30 years' time at the very earliest. Until then, we are likely to be stuck in the dark ages of transformers and Markov chains calling themselves LLMs and pretending to be intelligent.

Research operates in cycles of research careers. Paradigm shifts can happen when most of the current population of researchers stops working in AI and thereby stops flooding the world with variations on the same brute-force trick. That will give a new generation of students breathing room to be actually creative and innovative. But before any of that happens, people like Musk and Zuckerberg need to run out of money or die. When the money and interest dry up, AI research will no longer be attractive. That's when the dedicated, methodical kids with a real interest in the field will get their chance to shine.

Comment Re:Not just defensive (Score 2) 49

Part of it is learning to be diplomatic with ignorant people such as those you mention. Don't say: "You're wrong; the book you are looking for doesn't exist." Say instead: "Sorry, the library computer can't find it right now; maybe it was misfiled; come back another day." You will seem helpful and mildly incompetent to them, and then they will go away.

Comment Re:You're Totally Right (Score 1) 24

Sometimes you have to break a few eggs to make an omelette. The wider issue is: what is the acceptable tradeoff between false positives and false negatives that keeps the slop in check for everyone? Clearly it is not to err so far on the side of caution that there are no false positives at all; that merely sweeps the problem under the rug.

The people you mention who didn't use AI are essentially victims of the AI cheaters, whose behaviour provokes predictable countermeasures. The wider journal readership are victims just the same, hoodwinked with fake papers and fraudulent datasets.

Comment Re:King George the Third... (Score 4, Insightful) 263

Do not assume that the maggots are fully autonomous idiots.

In other parts of the world, there are fledgling maggot movements too. What is particularly interesting and relevant about them is that they often parrot ideas and misconceptions that simply do not apply where those movements are forming, owing to cultural and legal differences in those countries.

You can see this by observing marches, protests, and interviews in other countries. The slogans and demands just don't make sense locally most of the time, yet they are carbon copies of American ones.

This tells you two things: 1) the maggots in America and abroad are being paid to propagate conservative hate speech in their own countries; 2) the groups paying them are American, because the talking points are American conservative talking points even in parts of the world where they make no sense. The local maggot movements are simply paid to propagate the American talking points in their local cultures, and nobody bothers to adapt them or check whether they make sense at all.

The last thing this tells you is this: if you follow the money to the source, you will know who needs to be stopped for the good of the world. When the payola stops, the movements will stop. The ball is in Americans' court (for now; don't sit on your ass too long).

Comment Re:What's the difference between tablet and phone? (Score 3, Interesting) 122

Back when the iPhone was introduced, I was convinced that within 10 years computing would mostly be done this way: connecting your portable computer (smartphone) to a dock that turned it into your home computer. I'm surprised that this idea never gained traction.

I think there have been a few reasons for this.

I think the biggest one is that nobody could meaningfully agree on a form factor. Now, *I* always thought that a great option would be a 'zombie laptop' that had a keyboard, trackpad, webcam, and a battery, with a slot to slide your phone into. The phone would connect to the peripherals and gain a 12" screen and a keyboard, charging in the process.

The devil, of course, was in the details. Even if Apple made such a device and molded it to fit the iPhone, the problem then became that a user couldn't put their phone in a case, or it wouldn't fit in the clamshell's phone slot. There would also need to be adapters for the different sized phones, or different SKUs entirely with per-device slots, which then also pigeonholes Apple into a particular physical form factor. That begets the "use a C-to-C cable" option, which is better, but ergonomically annoying if one isn't sitting at a desk. A wireless option solves both of these problems, but kills both batteries in the process. Finally, there's the price point: the cost to the end user would need to be low enough that it doesn't just make more sense to have two devices, AND first-gen owners would likely feel some kind of way if upgrading their phone also meant buying a new clamshell. It works well on paper, but pretty much any real-world testing would show the shortcomings quickly.

Suppose that was solved somehow... While the Samsung Fold phones are helping justify the time spent adding a multi-window interface to Android, try installing Android-x86 in a VM for a bit and watch what happens. It's been a while since I tried, but the experience was pretty bad: the inability to open e-mails in new windows was particularly infuriating, many apps take exception to having multiple concurrent instances for side-by-side usage, and window focus gets tricky to navigate. It *can* be done, but it ultimately felt like all compromise, no improvement.

Finally, there *is* such a thing, at least to an extent. Many, MANY apps are just frontends for a website. iCloud is like this, the whole Google ecosystem is like this, Salesforce is like this... for a solid number of apps, there is a browser-based frontend that works just as well, if not better in at least some cases. Data is commonly synced with Google or iCloud or Dropbox. The number of apps that are worth running on a phone, that have no desktop or browser analogue, and that would justify a user getting a clamshell to run them in a larger window... is small enough that it is seldom worth dealing with all of the *other* compromises involved.

Comment Re:Either the recordings are still available or no (Score 1) 41

This page claims over 400,000 recordings but links to a listing of only 187,034 audio files. I'm guessing the discrepancy is the scope of the settlement: IA agreed to take down the files that the plaintiffs could prove were theirs, and no money changed hands.

Comment Exactly Forward (Score 1) 39

"I don't give a shit if some Russian/Kazakh/Malaysian bot farmer wants to take over my phone."

So you do no banking on your phone? Unlikely.

For the 99% of people who do in fact use a phone for banking, protection from lower-level criminals is invaluable. For most people, real financial loss is possible when a phone is taken over, at the very least because the phone can be used to monitor banking access mechanisms.

Comment Re:uncover overlooked or never-considered patterns (Score 1) 17

The deep learning revolution did not solve the problem you claim it did. What deep learning does is allow more complex piecewise linear functions to be modelled efficiently (if you use relu, that is, which is the most popular activation (*)). That's both a blessing and a curse.
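To make the piecewise-linear point concrete, here is a minimal sketch of my own (plain numpy, random weights, one input, 8 hidden units; nothing here comes from a real model): sample a tiny relu net on a fine grid and count the distinct slopes. You get a handful of straight segments, not a smooth curve.

    # Minimal sketch: a relu net computes a piecewise linear function.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)  # hidden layer
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)  # output layer

    def relu_net(x):
        h = np.maximum(W1 @ x + b1, 0.0)  # relu switches units on/off
        return W2 @ h + b2

    xs = np.linspace(-3.0, 3.0, 2001)
    ys = np.array([relu_net(np.array([x]))[0] for x in xs])
    slopes = np.round(np.diff(ys) / np.diff(xs), 6)
    # Roughly one slope per linear region (at most 9 regions here, plus a
    # few stray values where a sample interval straddles a kink), not 2000.
    print(len(set(slopes)))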

What actually happened in the deep learning revolution is that humans solved the problem of designing basic features over many generations of papers, progressively simplifying the solution and discovering what is important and what isn't. The algorithms were weeded out until the point we are at now, where the data input is matched to the algorithm; in this case the algorithm of choice happens to be of the deep learning type. It only looks like deep learning is good for every dataset; it's not true.

For example, in vision problems, try training a deep network on input that is not in the form of pixels and not in the form of multiple colour planes. It will fail miserably; the quality of recognition will be abysmal. That's why data design is so important: you have to know what the strengths of the AI model actually are. In this case, the statistical regularities between neighbouring pixels are what enables the CNN layers to extract information. Those regularities are an artefact of choosing to stack pixels and colour planes into a rectangular grid, and that choice solves most of the problem.
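A quick way to test that claim yourself (a hypothetical experiment sketch of my own; the 28x28 size is just MNIST-style for concreteness): apply one fixed random permutation to every image's pixels before training. Every pixel value is preserved, but the neighbouring-pixel regularities are destroyed, and a CNN's accuracy typically collapses toward what a plain fully connected net achieves.

    # Hypothetical experiment: scramble pixel positions with one fixed
    # permutation, then train the same CNN on original vs scrambled data.
    import numpy as np

    rng = np.random.default_rng(0)
    perm = rng.permutation(28 * 28)  # one permutation, reused for all images

    def shuffle_pixels(img):
        # img: (28, 28) array, e.g. one MNIST-style digit
        return img.reshape(-1)[perm].reshape(28, 28)

    # A fully connected net is indifferent to this relabelling of inputs;
    # a CNN loses the local correlations its filters exploit.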

Now, pixels didn't always exist; they were invented quite recently. Try looking up TV technologies of the 1930s and you'll find that it's all about deflecting electron beams. There's really nothing natural about pixels; it's just what our current technologies are based on. And so there's nothing natural about what a deep network does either; it's just a system that has been selected for fitness against our current tech stack, for a handful of high-value problem domains. That implies nothing about other problem domains that haven't been studied so intensively.

(*) If you don't use relu but some other smooth activation family for your deep network, there will always be a close piecewise linear approximation, since piecewise linear functions are dense in the space of continuous functions. So it's not a big loss of generality to assume relu everywhere.
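As a rough illustration of that density claim (my own sketch, not from the post): approximate tanh by linear interpolation between a growing number of knots and watch the worst-case error shrink.

    # Sketch: piecewise linear interpolation approximates a smooth
    # activation (tanh) arbitrarily well as knots are added.
    import numpy as np

    xs = np.linspace(-4.0, 4.0, 10001)
    for n_knots in (5, 9, 17, 33, 65):
        knots = np.linspace(-4.0, 4.0, n_knots)
        approx = np.interp(xs, knots, np.tanh(knots))  # piecewise linear
        print(n_knots, float(np.abs(approx - np.tanh(xs)).max()))
    # Max error falls roughly like O(1/n_knots**2) for smooth functions.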
