AI

Nikon, Sony and Canon Fight AI Fakes With New Camera Tech (nikkei.com) 109

Nikon, Sony Group and Canon are developing camera technology that embeds digital signatures in images so that they can be distinguished from increasingly sophisticated fakes. From a report: Nikon will offer mirrorless cameras with authentication technology for photojournalists and other professionals. The tamper-resistant digital signatures will include such information as date, time, location and photographer. Such efforts come as ever-more-realistic fakes appear, testing the judgment of content producers and users alike.

An alliance of global news organizations, technology companies and camera makers has launched a web-based tool called Verify for checking images free of charge. If an image has a digital signature, the site displays date, location and other credentials. The digital signatures now share a global standard used by Nikon, Sony and Canon. Japanese companies control around 90% of the global camera market. If an image has been created with artificial intelligence or tampered with, the Verify tool flags it as having "No Content Credentials."
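The check the Verify tool performs can be sketched in miniature: a signature binds the image bytes to the capture metadata, so any edit to either invalidates the credential. Real content credentials (the C2PA standard the camera makers share) use public-key certificates embedded in the file; the HMAC, key, and field names below are simplifying assumptions for illustration only:

```python
import hashlib
import hmac
import json

def sign_capture(image_bytes: bytes, metadata: dict, camera_key: bytes) -> str:
    """Bind metadata (date, location, photographer) to the image bytes."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(camera_key, payload, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, metadata: dict,
                   signature: str, camera_key: bytes) -> bool:
    """Recompute the signature; any change to image or metadata fails."""
    expected = sign_capture(image_bytes, metadata, camera_key)
    return hmac.compare_digest(expected, signature)

key = b"per-camera-secret"        # hypothetical key material
img = b"raw sensor data"          # stand-in for the actual image file
meta = {"date": "2023-12-31", "location": "Tokyo", "photographer": "A. Example"}

sig = sign_capture(img, meta, key)
assert verify_capture(img, meta, sig, key)               # authentic: credentials check out
assert not verify_capture(img + b"edit", meta, sig, key) # tampered: "No Content Credentials"
```

A tool like Verify would perform the second step against the signature shipped inside the file, flagging anything that fails as lacking credentials.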

Google

Google Debuts Imagen 2 With Text and Logo Generation (techcrunch.com) 13

Google's making the second generation of Imagen, its AI model that can create and edit images given a text prompt, more widely available -- at least to Google Cloud customers using Vertex AI who've been approved for access. From a report: But the company isn't disclosing which data it used to train the new model -- nor introducing a way for creators who might've inadvertently contributed to the data set to opt out or apply for compensation.

Called Imagen 2, Google's enhanced model -- which was quietly launched in preview at the tech giant's I/O conference in May -- was developed using technology from Google DeepMind, Google's flagship AI lab. Compared to the first-gen Imagen, it's "significantly" improved in terms of image quality, Google claims (the company bizarrely refused to share image samples prior to this morning), and introduces new capabilities including the ability to render text and logos. "If you want to create images with a text overlay -- for example, advertising -- you can do that," Google Cloud CEO Thomas Kurian said during a press briefing on Tuesday.

Science

Archaeologists Unearth a Secret Lost Language From 3,000 Years Ago (sciencealert.com) 123

"And no, it's not COBOL," jokes long-time Slashdot reader schwit1, sharing this report from ScienceAlert: A secret text has been discovered in Türkiye, scattered among tens of thousands of ancient clay tablets, which were written in the time of the Hittite Empire during the second millennium BCE. No one yet knows what the curious cuneiform script says, but it seems to be a long-lost language from more than 3,000 years ago.

Experts say the mysterious idiom is unlike any other ancient written language found in the Middle East, although it seems to share roots with other Anatolian-Indo-European languages. The sneaky scrawlings start at the end of a cultic ritual text written in Hittite — the oldest known Indo-European tongue — after an introduction that essentially translates to: "From now on, read in the language of the country of Kalasma"... Currently, there are no available photos of the newly discovered tablet with Kalasmaic writings, as experts are still working out how to translate it. Hittitologist Daniel Schwemer and his colleagues hope to publish their results along with images of their discovery sometime next year.

AI

Artists May 'Poison' AI Models Before Copyright Office Can Issue Guidance (arstechnica.com) 66

An anonymous reader writes: Artists have spent the past year fighting companies that have been training AI image generators—including popular tools like the impressively photorealistic Midjourney or the ultra-sophisticated DALL-E 3—on their original works without consent or compensation. Now, the United States has promised to finally get serious about addressing the copyright concerns raised by AI, President Joe Biden said in his much-anticipated executive order on AI, which was signed this week. The US Copyright Office had already been seeking public input on AI concerns over the past few months through a comment period ending on November 15. Biden's executive order has clarified that following this comment period, the Copyright Office will publish the results of its study. And then, within 180 days of that publication—or within 270 days of Biden's order, "whichever comes later"—the Copyright Office's director will consult with Biden to "issue recommendations to the President on potential executive actions relating to copyright and AI."

"The recommendations shall address any copyright and related issues discussed in the United States Copyright Office's study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training," Biden's order said. That means that potentially within the next six to nine months (or longer), artists may have answers to some of their biggest legal questions, including a clearer understanding of how to protect their works from being used to train AI models. Currently, artists do not have many options to stop AI image makers—which generate images based on user text prompts—from referencing their works. Even companies like OpenAI, which recently started allowing artists to opt out of having works included in AI training data, only allow artists to opt out of future training data. [...] According to The Atlantic, this opt-out process—which requires artists to submit requests for each artwork and could be too cumbersome for many artists to complete—leaves artists stuck with only the option of protecting new works that "they create from here on out." It seems like it's too late to protect any work "already claimed by the machines" in 2023, The Atlantic warned. And this issue clearly affects a lot of people. A spokesperson told The Atlantic that Stability AI alone has fielded "over 160 million opt-out requests in upcoming training." Until federal regulators figure out what rights artists ought to retain as AI technologies rapidly advance, at least one artist—cartoonist and illustrator Sarah Andersen—is advancing a direct copyright infringement claim against Stability AI, maker of Stable Diffusion, another remarkable AI image synthesis tool.

Andersen, whose proposed class action could impact all artists, has about a month to amend her complaint to "plausibly plead that defendants' AI products allow users to create new works by expressly referencing Andersen's works by name," if she wants "the inferences" in her complaint "about how and how much of Andersen's protected content remains in Stable Diffusion or is used by the AI end-products" to "be stronger," a judge recommended. In other words, under current copyright laws, Andersen will likely struggle to win her legal battle if she fails to show the court which specific copyrighted images were used to train AI models and demonstrate that those models used those specific images to spit out art that looks exactly like hers. Citing specific examples will matter, one legal expert told TechCrunch, because arguing that AI tools mimic styles likely won't work—since "style has proven nearly impossible to shield with copyright." Andersen's lawyers told Ars that her case is "complex," but they remain confident that she can win, possibly because, as other experts told The Atlantic, she might be able to show that "generative-AI programs can retain a startling amount of information about an image in their training data—sometimes enough to reproduce it almost perfectly." But she could fail if the court decides that using data to train AI models is fair use of artists' works, a legal question that remains unclear.

AI

You Can Now Generate AI Images Directly in the Google Search Bar (engadget.com) 21

Google announced Thursday that users who have opted in to its Search Generative Experience (SGE) will be able to create AI images directly from the standard Search bar. From a report: SGE is Google's vision for our web-searching future. Rather than picking websites from a returned list, the system will synthesize a (reasonably) coherent response to the user's natural language prompt using the same data that the list's links led to. Thursday's updates are a natural expansion of that experience, simply returning generated images (using the company's Imagen text-to-picture AI) instead of generated text. Users type in a description of what they're looking for (a capybara cooking breakfast, in Google's example) and, within moments, the engine will create four alternatives to pick from and refine further. Users will also be able to export their generated images to Drive or share them via email.

AI

4chan Uses Bing To Flood the Internet With Racist Images (404media.co) 132

samleecole writes: 4chan users are coordinating a posting campaign where they use Microsoft Bing's AI text-to-image generator to create racist images that they can then post across the internet. The news shows how users are able to manipulate free-to-access, easy-to-use AI tools to quickly flood the internet with racist garbage, even when those tools are allegedly strictly moderated. "We're making propaganda for fun. Join us, it's comfy," the 4chan thread instructs. "MAKE, EDIT, SHARE."

A visual guide hosted on Imgur that's linked in that post instructs users to use AI image generators, edit the images to add captions that make them seem like political campaigns, and post them to social media sites, specifically Telegram, Twitter, and Instagram. 404 Media has also seen these images shared on a TikTok account that has since been removed. People being racist is not a technological problem. But we should pay attention to the fact that technology is, to borrow a programming concept, 10x'ing racist posters, allowing them to create more sophisticated content more quickly in a way we have not seen online before. Perhaps more importantly, they are doing so with tools that are allegedly "safe" and so strictly moderated that they will not generate completely harmless images of Julius Caesar. This means we are currently getting the worst of both worlds from Bing, an AI tool that will refuse to generate a nipple but is supercharging 4chan racists.

AI

The Community Pushing AI-Generated Porn To 'the Edge of Knowledge' 112

samleecole shares a report from 404 Media, a new independent media company founded by technology journalists Jason Koebler, Emanuel Maiberg, Samantha Cole, and Joseph Cox: On the Discord server for Mage Space, a popular platform for creating AI-generated images, is a list of channels where members share adult content. There are channels for furries, hardcore and softcore porn, and anime. At the bottom of the list is a channel named "other-nsfw" which includes a few distinct erotic genres that don't fit neatly into any of the others. Mostly, it's gore, violence, and bizarre, hyperrealistic erotic imagery entirely generated by AI. The images people create, and the long, meandering prompts they write, are a rich text that could offer a glimpse into where sexuality in the internet age is taking us next, and how we're steering it.

There's no shortage of fetish content on the internet, which might make the above statement sound ridiculous and unbelievable. Online, fetishists find their people and set to work making more of what they like, whether it's elaborate role-playing cosplays of themselves as sexy airplanes, blueberries, or slime monsters. Sometimes it pushes the bounds of the sane and legal: crush, fart, and scat porn all thrive online, and snuff films have been popular since before the internet existed. But with the rise in popularity of generative AI, and wildly popular sites like Mage.Space that let users generate anything they set their minds to, the limits are literally our own imaginations. With that power, people are wrangling images out of the AI that are on the edge of what's popular, let alone possible in the porn world. "This conversation we're having is literally on the edge of knowledge, nobody's writing about this in academia right now," Thomas Brooks, assistant professor of psychology at New Mexico Highlands University, told me. "Everybody's still kind of caught up in deepfakes. And they haven't quite grappled with generative AI yet."

"You, as the individual porn consumer, can now create your own special little fantasy and your own technological, disembodied sexuality," said Brooks, describing what he calls gamified pornography. "There's an internal motivation to solve the puzzle and get the prize. But then there's an external motivation of, 'can I come up with this crazy thing to show my anonymous internet friends.'"

"We're letting technology become mediators of our different psychosocial expressions," Brooks added.

AI

New AP Guidelines Lay the Groundwork For AI-Assisted Newsrooms (engadget.com) 11

An anonymous reader quotes a report from Engadget: The Associated Press published standards today for generative AI use in its newsroom. The organization, which has a licensing agreement with ChatGPT maker OpenAI, laid out a fairly restrictive and common-sense set of measures around the burgeoning tech while cautioning its staff not to use AI to make publishable content. Although nothing in the new guidelines is particularly controversial, less scrupulous outlets could view the AP's blessing as a license to use generative AI more excessively or underhandedly.

The organization's AI manifesto underscores a belief that artificial intelligence content should be treated as the flawed tool that it is -- not a replacement for trained writers, editors and reporters exercising their best judgment. "We do not see AI as a replacement of journalists in any way," the AP's Vice President for Standards and Inclusion, Amanda Barrett, wrote in an article about its approach to AI today. "It is the responsibility of AP journalists to be accountable for the accuracy and fairness of the information we share." The article directs its journalists to view AI-generated content as "unvetted source material," to which editorial staff "must apply their editorial judgment and AP's sourcing standards when considering any information for publication." It says employees may "experiment with ChatGPT with caution" but not create publishable content with it. That includes images, too. "In accordance with our standards, we do not alter any elements of our photos, video or audio," it states. "Therefore, we do not allow the use of generative AI to add or subtract any elements." However, it carved out an exception for stories where AI illustrations or art are a story's subject -- and even then, it has to be clearly labeled as such.

Barrett warns about AI's potential for spreading misinformation. To prevent the accidental publishing of anything AI-created that appears authentic, she says AP journalists "should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image's origin, and checking for reports with similar content from trusted media." To protect privacy, the guidelines also prohibit writers from entering "confidential or sensitive information into AI tools." Although that's a relatively common-sense and uncontroversial set of rules, other media outlets have been less discerning. [...] It's not hard to imagine other outlets -- desperate for an edge in the highly competitive media landscape -- viewing the AP's (tightly restricted) AI use as a green light to make robot journalism a central figure in their newsrooms, publishing poorly edited / inaccurate content or failing to label AI-generated work as such.
Further reading: NYT Prohibits Using Its Content To Train AI Models

Space

Planetary Defense Test Deflected An Asteroid But Unleashed a Boulder Swarm (ucla.edu) 63

A UCLA-led study of NASA's DART mission found that the collision launched a cloud of boulders from the asteroid's surface. "The boulder swarm is like a cloud of shrapnel expanding from a hand grenade," said David Jewitt, lead author of the study and a UCLA professor of earth and planetary sciences. "Because those big boulders basically share the speed of the targeted asteroid, they're capable of doing their own damage." From a news release: In September 2022, NASA deliberately slammed a spacecraft into the asteroid Dimorphos to knock it slightly off course. NASA's objective was to evaluate whether the strategy could be used to protect Earth in the event that an asteroid was headed toward our planet. Jewitt said that given the high speed of a typical impact, a 15-foot boulder hitting Earth would deliver as much energy as the atomic bomb that was dropped on Hiroshima. Fortunately, neither Dimorphos nor the boulder swarm have ever posed any danger to Earth. NASA chose Dimorphos because it was about 6 million miles from Earth and measured just 581 feet across -- close enough to be of interest and small enough, engineers reasoned, that the half-ton Double Asteroid Redirection Test, or DART, planetary defense spacecraft would be able to change the asteroid's trajectory.

When it hurtled into Dimorphos at 13,000 miles per hour, DART slowed Dimorphos' orbit around its twin asteroid, Didymos, by a few millimeters per second. But, according to images taken by NASA's Hubble Space Telescope, the collision also shook off 37 boulders, each measuring from 3 to 22 feet across. None of the boulders is on a course to hit Earth, but if rubble from a future asteroid deflection were to reach our planet, Jewitt said, they'd hit at the same speed the asteroid was traveling -- fast enough to cause tremendous damage. The research, published in the Astrophysical Journal Letters, found that the rocks were likely knocked off the surface by the shock of the impact. A close-up photograph taken by DART just two seconds before the collision shows a similar number of boulders sitting on the asteroid's surface -- and of similar sizes and shapes -- to the ones that were imaged by the Hubble telescope. The boulders that the scientists studied, among the faintest objects ever seen within the solar system, are observable in detail thanks to the powerful Hubble telescope.
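Jewitt's Hiroshima comparison is easy to sanity-check with back-of-the-envelope kinetic energy arithmetic. The density and impact speed below are typical assumed values for rocky asteroid material, not figures from the study:

```python
import math

# "15-foot boulder" from the article; density and speed are illustrative assumptions.
diameter_m = 15 * 0.3048   # 15 feet in meters
density = 3000             # kg/m^3, typical rocky asteroid material (assumption)
speed = 20_000             # m/s, a typical asteroid impact speed (assumption)

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius ** 3   # spherical boulder, ~1.5e5 kg
kinetic_energy = 0.5 * mass * speed ** 2            # joules

hiroshima_j = 15e3 * 4.184e9   # ~15 kilotons of TNT expressed in joules
print(f"boulder: {kinetic_energy:.2e} J, "
      f"~{kinetic_energy / hiroshima_j:.1f}x the Hiroshima bomb")
```

Under these assumptions the boulder lands within a factor of a few of the Hiroshima yield, so the comparison holds to order of magnitude; a somewhat higher impact speed or denser rock closes the gap entirely.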

Cellphones

Toronto Zoo Urges Visitors To Stop Showing Cellphone Videos To Gorillas (thestar.com) 62

An anonymous reader quotes a report from The Toronto Star: Nassir the gorilla, languid in the heat of a summer afternoon, sits just within reach of a faded sign taped to the glass of his enclosure at the Toronto Zoo, advising visitors not to share images on their cellphones with the swinging bachelor. "We've had a lot of members and guests that actually will put their phones up to the glass and show him videos," says Maria Franke, the zoo's director of wildlife conservation and welfare. "And Nassir is so into those videos. It was causing him to be distracted and not interacting with the other gorillas, and you know, being a gorilla. He was just so enthralled with gadgets and phones and the videos."

Gorillas, it seems, share more than just 98 per cent of our DNA. Zookeepers have discovered they can become every bit as interested in cellphones as the bipedal visitors who pay to see them. [...] Biologist Rob Laidlaw sees animal interest in technology as a manifestation of their need for stimulation -- a result of the boredom they experience in captivity. He says keeping such animals stimulated is a huge challenge, even for sanctuary organizations that provide sprawling enclosures. "They're looking for any opportunity they can find to engage intellectually," said Laidlaw, a chartered biologist and executive director of Zoocheck, an animal protection organization. Laidlaw says technology has its uses in zoos, but the emphasis needs to remain on providing as many animals as possible with environments that are as close to their native habitats as possible. "My fear is always that people see these things and think they're a panacea when in fact they're not. They're just one little tiny facet of relieving the boredom of animals."

As most teenagers do, Nassir seems to have grown out of his preoccupation with cellphones, says Franke. He is strongly bonded to his half-brother, Sadiki, who shares the zoo's rainforest habitat with him. "It's like Nassir was a little boy, all he wanted to do was sit in the basement and play games on the computer," said Franke. "I'm not really sure what the content of the videos was. Was it gorillas in the wild? I have no idea. Was it a cartoon? I have no idea. But obviously, there was something that was attracting him to it." But just in case he isn't quite over it, the note to the public remains up -- for now.

Encryption

Security Researchers Latest To Blast UK's Online Safety Bill As Encryption Risk (techcrunch.com) 5

An anonymous reader quotes a report from TechCrunch: Nearly 70 IT security and privacy academics have added to the clamor of alarm over the damage the U.K.'s Online Safety Bill could wreak to, er, online safety unless it's amended to ensure it does not undermine strong encryption. Writing in an open letter (PDF), 68 U.K.-affiliated security and privacy researchers have warned the draft legislation poses a stark risk to essential security technologies that are routinely used to keep digital communications safe.

"As independent information security and cryptography researchers, we build technologies that keep people safe online. It is in this capacity that we see the need to stress that the safety provided by these essential technologies is now under threat in the Online Safety Bill," the academics warn, echoing concerns already expressed by end-to-end encrypted comms services such as WhatsApp, Signal and Element -- which have said they would opt to withdraw services from the market or be blocked by U.K. authorities rather than compromise the level of security provided to their users. [...] "We understand that this is a critical time for the Online Safety Bill, as it is being discussed in the House of Lords before being returned to the Commons this summer," they write. "In brief, our concern is that surveillance technologies are deployed in the spirit of providing online safety. This act undermines privacy guarantees and, indeed, safety online."

The academics, who hold professorships and other positions at universities around the country -- including a number of Russell Group research-intensive institutions such as King's College and Imperial College in London, Oxford and Cambridge, Edinburgh, Sheffield and Manchester to name a few -- say their aim with the letter is to highlight "alarming misunderstandings and misconceptions around the Online Safety Bill and its interaction with the privacy and security technologies that our daily online interactions and communication rely on."

"There is no technological solution to the contradiction inherent in both keeping information confidential from third parties and sharing that same information with third parties," the experts warn, adding: "The history of 'no one but us' cryptographic backdoors is a history of failures, from the Clipper chip to DualEC. All technological solutions being put forward share that they give a third party access to private speech, messages and images under some criteria defined by that third party."

Last week, Apple publicly voiced its opposition to the bill. The company said in a statement: "End-to-end encryption is a critical capability that protects the privacy of journalists, human rights activists, and diplomats. It also helps everyday citizens defend themselves from surveillance, identity theft, fraud, and data breaches. The Online Safety Bill poses a serious threat to this protection, and could put UK citizens at greater risk. Apple urges the government to amend the bill to protect strong end-to-end encryption for the benefit of all."

IOS

Apple Announces iOS 17 With StandBy Charging Mode, Better Autocorrect (theverge.com) 44

At WWDC today, Apple debuted iOS 17. "Highlights include new safety features, a built-in journaling app, a new nightstand mode, redesigned contact cards, better auto-correct and voice transcription, and live voicemail," reports The Verge. "And you'll be able to drop the 'hey' from 'Hey Siri.'" From the report: Your contact book is getting an update with a new feature called posters, which turns contact cards into flashy marquee-like images that show up full-screen on your recipient's iPhone when you call them. They use a similar design language as the redesigned lock screens, with bold typography options and the ability to add Memoji, and will work with third-party VoIP apps. There's also a new live transcription feature for voicemail that lets you view a transcript of the message a caller is leaving in real time. You can choose to ride it out or pick up the call, and it's all handled on-device. You'll also be able to leave a message on FaceTime, too.

Some updates to messages include the ability to filter searches with additional terms, a feature that jumps to the most recent message so you can catch up more easily, transcriptions of voice messages -- similar to what the Pixel 7 series introduced -- and a series of new features called Check In that shares your live location and status with someone else. It can automatically send a message to a friend when you've arrived home, and it can share your phone's battery and cell service status to help avoid confusion if you're in a dead zone. Stickers are getting an overhaul, too, with the ability to add any emoji or photo cutout as a "sticker" positioned on iMessages or anywhere within the system. Live photos can be turned into animated stickers, too, and you can now add effects to stickers.

AirDrop gets an update to send contact information -- cleverly called NameDrop -- which will send your selected email addresses and phone numbers (and your poster) just by bringing two iPhones near each other. It also works between an iPhone and an Apple Watch. Photos can be shared the same way, and if the file is a big one, it's now possible to move out of range while continuing the download. iOS 17 also includes keyboard updates, including enhancements to autocorrect. It now relies on a new language model for better accuracy, plus an easier shortcut to revert to the original word you wrote if necessary. There's now in-line predictive typing and sentence-level autocorrections to correct more grammatical mistakes. It'll finally learn your favorite cuss words, too; Apple's Craig Federighi even made a "ducking" joke onstage. Dictation uses a new AI model, too, that's more accurate.

A new app called Journal automatically suggests moments that you might want to commemorate in a journal entry. Your entries can include photos, music, and activities, and you can schedule reminders for yourself to start writing. It's end-to-end encrypted, too, to keep things private. StandBy is a new mode for charging that turns the screen into a status display with the date and time. It can show information from Live Activities, widgets, and smart stacks and automatically turns on when your phone is in landscape mode while charging. You can swipe to the right to see some of your highlighted photos, and it comes with customizable clockfaces. Siri will surface visual results in StandBy, and the display shifts to a red tone at night to avoid disrupting sleep. Last but not least, Siri gets a boost, too, and finally lets you drop the "hey" from "Hey Siri." It will also recognize back-to-back commands.
iOS 17 is available to developers today, with a public beta released next month.

Youtube

YouTube is Killing Stories 37

YouTube is getting rid of Stories, a feature for temporary posts, beginning in June. Users won't be able to post Stories starting June 26th, and existing posts will expire after seven days. From a report: Stories were first introduced in 2017 under the name Reels and were available to users with over 10,000 subscribers. Similar to Instagram (which in turn lifted the concept from Snapchat), YouTube Stories disappeared after a set amount of time; creators could use Stories to post updates or behind-the-scenes content to promote their channel. But looking around today, it doesn't seem to have caught on -- access was limited, few creators seem to be regularly posting Stories, and the feature doesn't get much promotion even from YouTube. In the absence of Stories, YouTube wants creators to instead post content to other surfaces on the platform: Community Posts and Shorts. The company recently expanded access to Community Posts, a text-based updates feature, and added the ability to have posts expire after a certain period. Creators can also share polls, quizzes, images, and videos as Community Posts, which appear in a tab on channels.

Google

Google Shared AI Knowledge With the World - Until ChatGPT Caught Up (washingtonpost.com) 33

For years Google published scientific research that helped jump-start its competitors. But now it's lurched into defensive mode. From a report: In February, Jeff Dean, Google's longtime head of artificial intelligence, announced a stunning policy shift to his staff: They had to hold off sharing their work with the outside world. For years Dean had run his department like a university, encouraging researchers to publish academic papers prolifically; they pushed out nearly 500 studies since 2019, according to Google Research's website. But the launch of OpenAI's groundbreaking ChatGPT three months earlier had changed things. The San Francisco start-up kept up with Google by reading the team's scientific papers, Dean said at the quarterly meeting for the company's research division. Indeed, transformers -- a foundational part of the latest AI tech and the T in ChatGPT -- originated in a Google study.

Things had to change. Google would take advantage of its own AI discoveries, sharing papers only after the lab work had been turned into products, Dean said, according to two people with knowledge of the meeting, who spoke on the condition of anonymity to share private information. The policy change is part of a larger shift inside Google. Long considered the leader in AI, the tech giant has lurched into defensive mode -- first to fend off a fleet of nimble AI competitors, and now to protect its core search business, stock price, and, potentially, its future, which executives have said is intertwined with AI. In op-eds, podcasts and TV appearances, Google CEO Sundar Pichai has urged caution on AI. "On a societal scale, it can cause a lot of harm," he warned on "60 Minutes" in April, describing how the technology could supercharge the creation of fake images and videos. But in recent months, Google has overhauled its AI operations with the goal of launching products quickly, according to interviews with 11 current and former Google employees, most of whom spoke on the condition of anonymity to share private information.

News

Substack Launches Notes (theverge.com) 21

Substack's Twitter-like feature for shorter posts, called Notes, is launching for everyone on Tuesday. The Verge reports: Substack's Notes will appear in their own separate tab, meaning they'll be separate from the full newsletters you can read in the Inbox tab or the threads you can read in the Chat tab. In a blog post, Substack suggests using Notes to share things like "posts, quotes, comments, images, and links," and there is no character limit, Substack spokesperson Helen Tobin tells The Verge.

Each post can include up to six photos or GIFs, but video isn't supported. Notes you share won't go to subscriber inboxes; they'll just live on the Substack website and app. And you can interact with other Notes with like, reply, and "restack" (retweet) buttons. Within the Notes tab, you can look through two different feeds: "Home" and "Subscribed." "Home" shows notes from writers you subscribe to and "writers they recommend," meaning you'll see notes from people you may not already be familiar with. "Subscribed" only shows notes from people you subscribe to.

AI

Slashdot Asks: How Are You Using ChatGPT? 192

OpenAI's ChatGPT has taken the world by storm with its ability to give solutions to complex problems almost instantly and with nothing more than a text prompt. Up until yesterday, ChatGPT was based on GPT-3.5, a deep learning language model that was trained on an impressive 175 billion parameters. Now, it's based on GPT-4 (available for ChatGPT+ subscribers), capable of solving even more complex problems with greater accuracy (40 percent more likely to give factual responses). It's also capable of receiving images as a basis for interaction, instead of just text. While the company has chosen not to reveal how large GPT-4 is, it claims the model scored in the 88th percentile on a number of tests, including the Uniform Bar Exam, LSAT, SAT Math and SAT Evidence-Based Reading & Writing exams.

ChatGPT is extremely capable, but its responses largely depend on the questions or prompts you enter. In other words, the better you describe and phrase the problem or question, the better the results. We're already starting to see companies require that new hires know not only how to use ChatGPT but how to extract the most out of it.

That being said, we'd like to know how Slashdotters are using the chatbot. What are some of your favorite prompts? Have you used it to become more efficient at work? What about for coding? Please also share the specific prompts you used, to help others get similar results.

Portables (Apple)

The Galaxy Book3 Ultra Is Samsung's Shot At the MacBook Pro (theverge.com) 112

At the Samsung Galaxy Unpacked 2023 event today, Samsung announced the Galaxy Book3 Ultra, a 16-inch workstation laptop with a 120Hz OLED screen, an H-Series Core i7 or Core i9, and an RTX 4050 or 4070 GPU. "Samsung makes a number of Galaxy Book models, but this is the first one of the past few years that has really targeted the deep-pocketed professional user -- that is, the core audience for Apple's high-powered and wildly expensive MacBook Pro 16," reports The Verge. "It'll start at $2,399.99 ($100 cheaper than the base MacBook Pro 16), with a release date still to be announced." From the report: Like its siblings in the Galaxy Book3 line, a big draw of this workstation will be its screen. It's got a 2880 x 1800 120Hz 16:10 OLED display (a welcome change from the 16:9 panels that adorned last year's Galaxy Book2) rated for 400 nits of brightness [...]. Elsewhere, using the device felt pretty similar to using any number of other Samsung Galaxy Books, with a satisfyingly clicky keyboard, a smooth finish, a high-quality build, and a compact chassis. The Ultra is 0.65 inches thick and 3.9 pounds, which is slightly thinner and close to a pound lighter than the 16-inch MacBook Pro that Apple just released [...].

I was able to use a number of Samsung's continuity features, including Second Screen (which allows you to easily use a Galaxy Tab as a second monitor) and Quick Share (which allows you to quickly transfer images and other files between Samsung devices). For Samsung enthusiasts, those seem like handy features that aren't too much of a hassle to set up. The one feature I had issues with was the touchpad -- it registered some of my two-finger clicks as one-finger clicks and wasn't quite picking up all of my scrolls. The units in Samsung's demo area were preproduction devices, so I hope this is a kink Samsung can iron out before the final release.

Unfortunately, we don't yet know how it will stack up when it comes to battery life. The M2 generation of MacBooks is very strong on that front -- and given that the Galaxy Book3 Ultra is running a high-resolution screen, a power-hungry H-series processor, and a very power-hungry RTX GPU, I'm a little bit nervous about that. If Samsung can pull off a device that lasts nearly as long as Apple's do, given those factors, hats off to them.
Further reading:
The Samsung Galaxy S23 Ultra Is a Minor Update To a Spec Monster
Samsung, Google and Qualcomm Team Up To Build a New Mixed-Reality Platform

Privacy

Roomba Testers Feel Misled After Intimate Images Ended Up on Facebook (technologyreview.com) 76

An investigation recently revealed how images of a minor and a tester on the toilet ended up on social media. iRobot said it had consent to collect this kind of data from inside homes -- but participants say otherwise. From a report: When Greg unboxed a new Roomba robot vacuum cleaner in December 2019, he thought he knew what he was getting into. He would allow the preproduction test version of iRobot's Roomba J series device to roam around his house, let it collect all sorts of data to help improve its artificial intelligence, and provide feedback to iRobot about his user experience. He had done this all before. Outside of his day job as an engineer at a software company, Greg had been beta-testing products for the past decade. He estimates that he's tested over 50 products in that time -- everything from sneakers to smart home cameras.

But what Greg didn't know -- and does not believe he consented to -- was that iRobot would share test users' data in a sprawling, global data supply chain, where everything (and every person) captured by the devices' front-facing cameras could be seen, and perhaps annotated, by low-paid contractors outside the United States who could screenshot and share images at will. Greg, who asked that we identify him only by his first name because he signed a nondisclosure agreement with iRobot, is not the only test user who feels dismayed and betrayed. Nearly a dozen people who participated in iRobot's data collection efforts between 2019 and 2022 have come forward in the weeks since MIT Technology Review published an investigation into how the company uses images captured from inside real homes to train its artificial intelligence. The participants have shared similar concerns about how iRobot handled their data -- and whether those practices conform with the company's own data protection promises. After all, the agreements go both ways, and whether or not the company legally violated its promises, the participants feel misled.

Crime

UK Govt: Netflix Password Sharing Is Illegal and Potentially Criminal Fraud (torrentfreak.com) 70

An anonymous reader quotes a report from TorrentFreak: The UK Government's Intellectual Property Office published new piracy guidance today, and it contains a small, easily missed detail. People who share their Netflix, Amazon Prime, or Disney+ passwords are violators of copyright law. And it gets worse. The IPO informs TorrentFreak that password sharing could also mean criminal liability for fraud. [...] In a low-key announcement today, the UK Government's Intellectual Property Office announced a new campaign in partnership with Meta, aiming to help people avoid piracy and counterfeit goods online. Other than in the headline, there is zero mention of Meta in the accompanying advice, and almost no advice that hasn't been issued before. But then this appears: "Piracy is a major issue for the entertainment and creative industries. Pasting internet images into your social media, password sharing on streaming services and accessing the latest films, tv series or live sports events through kodi boxes, fire sticks or Apps without paying a subscription all break copyright laws. Not only are you breaking the law but stopping someone earning a living from their hard work."

TorrentFreak immediately contacted the Intellectual Property Office for clarification on the legal side, particularly since password sharing sits under a piracy heading. The IPO's response was uncompromising, to put it mildly. "There are a range of provisions in criminal and civil law which may be applicable in the case of password sharing where the intent is to allow a user to access copyright protected works without payment," the IPO informs TorrentFreak. "These provisions may include breach of contractual terms, fraud or secondary copyright infringement depending on the circumstances." Given that using the "services of a members' club without paying and without being a member" is cited as an example of fraud in the UK, the bar for criminality is set very low, unless the Crown Prosecution Service decides otherwise, of course.

Nintendo

Nintendo Goes After Fan-Made Custom Steam 'Icons' With DMCA Takedowns (arstechnica.com) 41

An anonymous reader quotes a report from Ars Technica: Nintendo has issued a number of Digital Millennium Copyright Act (DMCA) requests against SteamGridDB (SGDB), a site that hosts custom fan-made icons and images used to represent games on Steam's front-end interface. Since 2015, SGDB's collection has grown to include hundreds of thousands of images representing tens of thousands of titles. That includes custom imagery for many standard Steam games and emulated game ROMs, which can be added to Steam as "external games."

To be clear, SteamGridDB doesn't host the kind of ROM files that have gotten other sites in legal trouble with Nintendo, or even the emulators used to run those games. "We don't support piracy in any way," an SGDB admin (who asked to remain anonymous) told Ars. "The website is just a free repository where people can share options to customize their game launchers." But in a series of DMCA requests viewed by Ars Technica, dated October 27, Nintendo says some of the imagery on SGDB "displays Nintendo's trademarks and other intellectual property (including characters) which is likely to lead to consumer confusion." Thus, dozens of SGDB images have been replaced with a blank image featuring the text "this asset has been removed in response to a DMCA takedown request" (you can see some of the specific images that were removed in this Internet Archive snapshot from April and compare it to how the listing currently looks).

Thus far, Nintendo's DMCA requests focus on imagery for just five Switch games that are listed on SGDB: Pokemon Scarlet & Violet, Splatoon 3, Super Mario Odyssey, The Legend of Zelda: Breath of the Wild, and Xenoblade Chronicles 3. Other Switch games listed on the site (some featuring the same exact characters) are unaffected, as are images for many older Nintendo titles. [...] Even for the Switch games in question, the DMCA requests focused on images that "straight up used sprites and assets from [Nintendo's] IP," according to the SGDB admin. Nintendo's requests so far seem to have ignored "completely original creations" and "pure fan art" even when that art involves drawings of Nintendo's original characters. It's unclear if those kinds of images would fall under a different legal standard in this case. "If an IP holder asks to take down original creations then I'll figure out the best way to handle that when it happens," the admin said. "The site is basically all just fan art, we're open to publishers reaching out and discussing any issues they may have. [The] best way to find a good course of action is to discuss options."