
Submission + - 'Reddit Is Taking Over Google' (businessinsider.com)

An anonymous reader writes: If you think you've been seeing an awful lot more Reddit results lately when you search on Google, you're not imagining things. The internet is in upheaval, and for website owners the rules of "winning" Google Search have never been murkier. Google's generative AI search engine is coming from one direction. It's creeping closer to mainstream deployment and bringing an existential crisis for SEOs and website makers everywhere. Coming from the other direction is an influx of posts from Reddit, Quora, and other internet forums that have climbed up through the traditional set of Google links. Data analysis from Semrush, which predicts traffic based on search ranking, shows that traffic to Reddit has climbed at an impressive clip since August. Semrush estimated that Reddit had over 132 million visitors in August 2023. At the time of publishing, it was projected to have over 346 million visitors in April 2024.

None of this is accidental. For years, Google has been watching users tack on "Reddit" to the end of search queries and finally decided to do something about it. Google started dropping hints in 2022 when it promised to do a better job of promoting sites that weren't just chasing the top of search but were more helpful and human. Last August, Google rolled out a big update to Search that seemed to kick this into action. Reddit, Quora, and other forum sites started getting more visibility in Google, both within the traditional links and within a new "discussions and forums" section, which you may have spotted if you're US-based. The timing of this Reddit bump has led to some conspiracy theories. In February, Google and Reddit announced a blockbuster deal that would let Google train its AI models on Reddit content. Google said the deal, reportedly worth $60 million, would "facilitate more content-forward displays of Reddit information," leading to some speculation that Google promised Reddit better visibility in exchange for the valuable training data. A few weeks later, Reddit also went public.

Steve Paine, marketing manager at Sistrix, called the rise of Reddit "unprecedented." "There hasn't been a website that's grown so much search visibility so quickly in the US in at least the last five years," he told Business Insider. Right now, Reddit ranks high for product searches. Reddit's main competitors are Wikipedia, YouTube, and Fandom, Paine said, and it also competes in "high-value commercial searches," putting it up against Amazon. The "real competitors," he said, are the subreddits that compete with brands on the web.

Privacy

Cops Can Force Suspect To Unlock Phone With Thumbprint, US Court Rules (arstechnica.com) 102

An anonymous reader quotes a report from Ars Technica: The US Constitution's Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law. The US Court of Appeals for the 9th Circuit had to grapple with the question of "whether the compelled use of Payne's thumb to unlock his phone was testimonial," the ruling (PDF) in United States v. Jeremy Travis Payne said. "To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial."

A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court's denial of Payne's motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine. There was a dispute in District Court over whether a CHP officer "forcibly used Payne's thumb to unlock the phone." But for the purposes of Payne's appeal, the government "accepted the defendant's version of the facts, i.e., 'that defendant's thumbprint was compelled.'" Payne's Fifth Amendment claim "rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination," the ruling said. Judges rejected his claim, holding "that the compelled use of Payne's thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking." "When Officer Coddington used Payne's thumb to unlock his phone -- which he could have accomplished even if Payne had been unconscious -- he did not intrude on the contents of Payne's mind," the court also said.

The Almighty Buck

Software Glitch Saw Aussie Casino Give Away Millions In Cash 16

A software glitch in the "ticket in, cash out" (TICO) machines at Star Casino in Sydney, Australia, led it to inadvertently give away $2.05 million over several weeks. The glitch allowed gamblers to reuse a receipt for slot machine winnings, leading to unwarranted cash payouts that went undetected due to systemic failures in oversight and auditing. The Register reports: News of the giveaway emerged on Monday at an independent inquiry into the casino, which has had years of compliance troubles that led to a finding that its operators were unsuitable to hold a license. In testimony [PDF] given on Monday to the inquiry, casino manager Nicholas Weeks explained that it is possible to insert two receipts into TICO machines. That was a feature, not a bug, and allowed gamblers to redeem two receipts and be paid the aggregate amount. But a software glitch meant that the machines would return one of those tickets and allow it to be re-used -- the barcode it bore was not recognized as having been paid.

"What occurred was small additional amounts of cash were being provided to customers in circumstances when they shouldn't have received it because of that defect," Weeks told the inquiry. Local media reported that news of the free cash got around and 43 people used the TICO machines to withdraw money to which they were not entitled -- at least one of them a recovering gambling addict who fell off the wagon as the "free" money allowed them to fund their activities. Known abusers of the TICO machines have been charged, and one of those set to face the courts is accused of association with a criminal group. (The first inquiry into The Star, two years ago, found it may have been targeted by organized crime groups.)
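The defect described above can be sketched in a few lines. This is a hypothetical illustration, not Star's actual software -- the `TicoMachine` class, barcodes, and dollar values are invented: the machine pays the aggregate of two tickets but only records one barcode as redeemed, so the returned ticket's barcode stays valid and can be cashed again.

```python
class TicoMachine:
    def __init__(self, tickets):
        self.tickets = tickets      # barcode -> winnings in dollars
        self.redeemed = set()       # barcodes already paid out

    def redeem_pair(self, code_a, code_b):
        """Pay the aggregate of two tickets -- the intended feature."""
        payout = sum(self.tickets.get(c, 0)
                     for c in (code_a, code_b) if c not in self.redeemed)
        # BUG: only the first barcode is recorded as paid; the second
        # ticket is physically returned and its barcode stays valid.
        self.redeemed.add(code_a)
        return payout

machine = TicoMachine({"T1": 50, "T2": 30})
first = machine.redeem_pair("T1", "T2")    # pays 80, as intended
second = machine.redeem_pair("T2", "T1")   # returned T2 pays again; T1 is rejected
```

A correct implementation would mark every paid barcode as redeemed before returning any ticket, which is presumably what the fix amounted to.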
AI

Meta Is Adding Real-Time AI Image Generation To WhatsApp 10

WhatsApp users in the U.S. will soon see support for real-time AI image generation. The Verge reports: As soon as you start typing a text-to-image prompt in a chat with Meta AI, you'll see how the image changes as you add more detail about what you want to create. In the example shared by Meta, a user types in the prompt, "Imagine a soccer game on mars." The generated image quickly changes from a typical soccer player to showing an entire soccer field on a Martian landscape. If you have access to the beta, you can try out the feature for yourself by opening a chat with Meta AI and then starting a prompt with the word "Imagine."

Additionally, Meta says its Meta Llama 3 model can now produce "sharper and higher quality" images and is better at showing text. You can also ask Meta AI to animate any images you provide, allowing you to turn them into a GIF to share with friends. Along with availability on WhatsApp, real-time image generation is also available to US users through Meta AI for the web.
Further reading: Meta Releases Llama 3 AI Models, Claiming Top Performance

Comment Re: months of coding training and a half-year Web/ (Score 2) 37

months of coding training and a half-year Web/VoTech degree can tech more then an 4 year theory loaded school.

Except for the theory parts, which can come in handy if you want to get past being just a code monkey. There's also plenty of coding done during a 4-year degree program -- at least there was in mine, and I was a grader. My OS class had us simulate an interactive operating system, and another class had us write a functional linking loader, both in C. (My concentration, back in the mid '80s, was operating systems design.) I also took classes using LISP, Pascal, and x86 assembly (on new PCs, since the printer for the IBM 370 had caught fire and destroyed everything in the room the previous summer). I was also a research assistant doing programming in LISP (on a Xerox 1108) and Prolog for a NASA grant on automated programming techniques -- they wanted a grad student but couldn't find one with LISP experience. I also ported programs, like the Franz LISP interpreter/compiler, from 4.3BSD on a VAX 11/785 to SunOS on a Sun-4, and debugged lpd -- we had BSD source code. I also had a jerk system administrator on those BSD/Sun systems who made us RTFM *and* the BSD source code before he would answer even the simplest question -- and I have to thank him for that very much.

Pretty sure you're not going to get all that at a six-month coding boot camp. People keep dismissing the value of a 4-year CS degree, but a lot of what you get out of it is what you're willing to put into it.

Privacy

Colorado Bill Aims To Protect Consumer Brain Data (nytimes.com) 13

An anonymous reader quotes a report from the New York Times: Consumers have grown accustomed to the prospect that their personal data, such as email addresses, social contacts, browsing history and genetic ancestry, are being collected and often resold by the apps and the digital services they use. With the advent of consumer neurotechnologies, the data being collected is becoming ever more intimate. One headband serves as a personal meditation coach by monitoring the user's brain activity. Another purports to help treat anxiety and symptoms of depression. Another reads and interprets brain signals while the user scrolls through dating apps, presumably to provide better matches. ("'Listen to your heart' is not enough," the manufacturer says on its website.) The companies behind such technologies have access to the records of the users' brain activity -- the electrical signals underlying our thoughts, feelings and intentions.

On Wednesday, Governor Jared Polis of Colorado signed a bill that, for the first time in the United States, tries to ensure that such data remains truly private. The new law, which passed by a 61-to-1 vote in the Colorado House and a 34-to-0 vote in the Senate, expands the definition of "sensitive data" in the state's current personal privacy law to include biological and "neural data" generated by the brain, the spinal cord and the network of nerves that relays messages throughout the body. "Everything that we are is within our mind," said Jared Genser, general counsel and co-founder of the Neurorights Foundation, a science group that advocated the bill's passage. "What we think and feel, and the ability to decode that from the human brain, couldn't be any more intrusive or personal to us." "We are really excited to have an actual bill signed into law that will protect people's biological and neurological data," said Representative Cathy Kipp, Democrat of Colorado, who introduced the bill.

Comment Ya, but (Score 1) 37

The business, which claims on its site it will help students land their "dream job" in tech at companies like Amazon, Cisco, and Google, ...

Con or not, some of this is on the students. I mean, those companies aren't really known for hiring people with only a few months of coding training and a half-year Web/VoTech degree. That said, taking advantage of their gullibility and/or desperation isn't cool.


Comment Re:AI Incest (Score 2, Interesting) 38

Yes, "you've been told" that by people who have no clue what they're talking about. Meanwhile, models just keep getting better and better. AI images have been out for years now. There's tons on the net.

First off, old datasets don't just disappear. So the *very worst case* is that you just keep developing your new models on pre-AI datasets.

Secondly, there is human selection on things that get posted. If humans don't like the look of something, they don't post it. In many regards, an AI image is replacing what would have been a much crappier alternative.

Third, dataset gatherers don't just blindly use a dump of the internet. If there's a place that tends to be a source of crappy images, they'll just exclude or downrate it.

Fourth, images are scored with aesthetic gradients before they're used. That is, humans train models to assess how much they like images, and then those models look at all the images in the dataset and rate them. Once again, crappy images are excluded / downrated.

Fifth, trainers do comparative training, look at image loss rates, and can automatically exclude problematic ones. For example, if you have a thousand images labeled "watermelon" but one is actually a zebra, the zebra will show an anomalous loss spike that warrants more attention (either from humans or in an automated manner). Loss rates can also be compared between data *sources* -- whole websites or even whole datasets -- and whatever is working best gets used.

Sixth, trainers also do direct blind human comparisons for evaluation.
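The loss-spike screening in the fifth point can be sketched simply. This is a hedged illustration, not any trainer's actual pipeline -- the function name, the 3-sigma threshold, and the toy loss values are all invented: samples whose loss is a statistical outlier for their label group get flagged for review or exclusion.

```python
import statistics

def flag_anomalous(losses, threshold=3.0):
    """Return sample ids whose loss is an outlier for their label group.
    `losses` maps sample id -> per-sample loss under the current model."""
    mean = statistics.mean(losses.values())
    stdev = statistics.pstdev(losses.values())
    if stdev == 0:
        return []  # all losses identical: nothing stands out
    return [sid for sid, loss in losses.items()
            if (loss - mean) / stdev > threshold]

# 999 images correctly labeled "watermelon" plus one mislabeled zebra,
# which shows an anomalous loss spike under the model.
losses = {f"img{i}": 0.4 for i in range(999)}
losses["zebra"] = 9.0
print(flag_anomalous(losses))  # -> ['zebra']
```

Real pipelines are fancier (per-label statistics, robust estimators, human review queues), but the principle is the same: the mislabeled sample betrays itself through its loss.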

This notion that AIs are just going to get worse and worse because of training on AI images is just ignorant. And demonstrably false.

Comment Re:Cue all the people acting shocked about this... (Score 4, Interesting) 38

As for why I think the ruling was bad: their argument was that because the person doesn't control the exact details of the composition of the work, the basic work (before postprocessing or selection) can't be copyrighted. But that exact same thing applies to photography outside of studio conditions. Ansel Adams wasn't out there going, "Okay, put a 20-meter oak over there, a 50-meter spruce over there, shape that mountain ridge a bit steeper, put a cliff on that side, cover the whole thing with snow... now add a rainbow to the sky... okay, cue the geese!" He was searching the search space for something to match a general vision -- or just taking advantage of happenstance findings. And sure, a photographer has many options at hand in terms of their camera and its settings, but if you think that's a lot, try messing around with AUTOMATIC1111 with all of its plugins some time.

The winner of Nature Photographer of the year in 2022 was Dmitry Kokh, with "House of Bears". He was stranded on a remote Russian archipelago and discovered that polar bears had moved into an abandoned weather station, and took photos of them. He didn't even plan to be there then. He certainly didn't plan on having polar bears in an abandoned weather station, and he CERTAINLY wasn't telling the bears where to stand and how to pose. Yet his work is a classic example of what the copyright office thinks should be a copyrightable work.

And the very notion that people don't control the layout with AI art is itself flawed. It was an obsolete notion even when they made their ruling -- we already had img2img, instructpix2pix, and ControlNet. The author CAN control the layout, down to whatever level of intricate detail they choose -- unlike, say, a nature photographer. And modern models give increasing levels of control even within the prompt itself: with SD3 (unlike SD1/2 or SC), you can do things like "A red sphere on a blue cube to the left of a green cone." We're heading toward a point -- if not there already -- where you could write a veritable short story's worth of detail to describe a scene.

I find it just plain silly that Person A could grab their cell phone and spend 2 seconds snapping a photo of whatever happens to be out their window, and that's copyrightable, but a person who spends hours searching through the latent space - let alone with ControlNet guidance (controlnet inputs can be veritable works of art in their own right) - isn't given the same credit for the amount of creative effort put into the work.

I think, rather, it's very simple: the human creative effort should be judged not on the output of the work (the work is just a transformation of the inputs), but the amount of creative effort they put into said inputs. Not just on the backend side - selection, postprocessing, etc - but on the frontend side as well. If a person just writes "a fluffy dog" and takes the first pic that comes up, obviously, that's not sufficient creative endeavour. But if a person spends hours on the frontend in order to get the sort of image they want, why shouldn't that frontend work count? Seems dumb to me.
