Security

Mistakenly Published Password Exposes Mercedes-Benz Source Code (techcrunch.com) 29

An anonymous reader quotes a report from TechCrunch: Mercedes-Benz accidentally exposed a trove of internal data after leaving a private key online that gave "unrestricted access" to the company's source code, according to the security research firm that discovered it. Shubham Mittal, co-founder and chief technology officer of RedHunt Labs, alerted TechCrunch to the exposure and asked for help in disclosing to the car maker. The London-based cybersecurity company said it discovered a Mercedes employee's authentication token in a public GitHub repository during a routine internet scan in January. According to Mittal, this token -- an alternative to using a password for authenticating to GitHub -- could grant anyone full access to Mercedes's GitHub Enterprise Server, thus allowing the download of the company's private source code repositories.

"The GitHub token gave 'unrestricted' and 'unmonitored' access to the entire source code hosted at the internal GitHub Enterprise Server," Mittal explained in a report shared by TechCrunch. "The repositories include a large amount of intellectual property connection strings, cloud access keys, blueprints, design documents, [single sign-on] passwords, API Keys, and other critical internal information." Mittal provided TechCrunch with evidence that the exposed repositories contained Microsoft Azure and Amazon Web Services (AWS) keys, a Postgres database, and Mercedes source code. It's not known if any customer data was contained within the repositories. It's not known if anyone else besides Mittal discovered the exposed key, which was published in late-September 2023.
A Mercedes spokesperson confirmed that the company "revoked the respective API token and removed the public repository immediately."

"We can confirm that internal source code was published on a public GitHub repository by human error. The security of our organization, products, and services is one of our top priorities. We will continue to analyze this case according to our normal processes. Depending on this, we implement remedial measures."

Crime

IT Consultant Fined For Daring To Expose Shoddy Security (theregister.com) 102

Thomas Claburn reports via The Register: A security researcher in Germany has been fined $3,300 for finding and reporting an e-commerce database vulnerability that was exposing almost 700,000 customer records. Back in June 2021, according to our pals at Heise, a contractor identified elsewhere as Hendrik H. was troubleshooting software for a customer of IT services firm Modern Solution GmbH. He discovered that the Modern Solution code made a MySQL connection to a MariaDB database server operated by the vendor. It turned out the password to access that remote server was stored in plain text in the program file MSConnect.exe, and opening it in a simple text editor would reveal the unencrypted hardcoded credential.

With that easy-to-find password in hand, anyone could log into the remote server and access data belonging to not just that one customer of Modern Solution, but data belonging to all of the vendor's clients stored on that database server. That info is said to have included personal details of those customers' own customers. And we're told that Modern Solution's program files were available for free from the web, so truly anyone could inspect the executables in a text editor for plain-text hardcoded database passwords. The contractor's findings were discussed in a June 23, 2021 report by Mark Steier, who writes about e-commerce. That same day Modern Solution issued a statement [PDF] -- translated from German -- summarizing the incident [...]. The statement indicates that sensitive data about Modern Solution customers was exposed: last names, first names, email addresses, telephone numbers, bank details, passwords, and conversation and call histories. But it claims that only a limited amount of data -- names and addresses -- about shoppers who made purchases from these retail clients was exposed. Steier contends that's incorrect and alleged that Modern Solution downplayed the seriousness of the exposed data, which he said included extensive customer data from the online stores operated by Modern Solution's clients.
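Because the credential sat in the executable as plain text, no reverse engineering was needed -- anything that dumps printable strings from a binary would surface it. A minimal Python sketch of that approach (only the MSConnect.exe file name comes from the report; the keyword list and script usage are assumptions for illustration):

```python
import re
import sys

def printable_strings(data: bytes, min_len: int = 6):
    """Extract runs of printable ASCII from a binary, like the Unix strings tool."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def suspicious_strings(path: str, keywords=(b"password", b"pwd", b"mysql", b"server=")):
    """Return printable strings that contain credential-like keywords."""
    with open(path, "rb") as f:
        data = f.read()
    return [s.decode("ascii", "replace")
            for s in printable_strings(data)
            if any(k in s.lower() for k in keywords)]

if __name__ == "__main__":
    # e.g. python strings_audit.py MSConnect.exe
    for s in suspicious_strings(sys.argv[1]):
        print(s)
```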

In September 2021 police in Germany seized the IT consultant's computers following a complaint from Modern Solution that claimed he could only have obtained the password through insider knowledge -- he worked previously for a related firm -- and the biz claimed he was a competitor. Hendrik H. was charged with unlawful data access under Section 202a of Germany's Criminal Code, based on the rule that examining data protected by a password can be classified as a crime under the Euro nation's cybersecurity law. In June 2023, a Julich District Court in western Germany sided with the IT consultant because the Modern Solution software was insufficiently protected. But the Aachen regional court directed the district court to hear the complaint. Now, the district court has reversed its initial decision. On January 17, the Julich District Court fined Hendrik H. and directed him to pay court costs.

IT

Google Maps Can Now Navigate Inside Tunnels (theverge.com) 38

Google Maps is about to get better at showing directions inside tunnels. A new feature spotted by SmartDroid allows the Android version of the app to use Bluetooth beacons to track your location in areas where GPS signals typically can't reach. The Verge: These beacons transmit Bluetooth signals that give location data to your phone, according to the Google-owned Waze, which already supports the feature. The app then uses this information along with the device's mobile connectivity to "provide real-time traffic data as it would with a typical GPS connection."
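Neither Google nor Waze has published implementation details, but beacon positioning generally comes down to estimating distance from received signal strength and then combining readings from beacons at known locations. Here is a toy Python sketch of the standard log-distance path-loss estimate; the constants are illustrative assumptions, not values from either app:

```python
def distance_from_rssi(rssi_dbm: float,
                       tx_power_dbm: float = -59.0,
                       path_loss_exp: float = 2.0) -> float:
    """Estimate distance in meters from a Bluetooth beacon reading.

    Uses the log-distance path-loss model: tx_power_dbm is the calibrated RSSI
    at 1 m, and path_loss_exp is ~2 in open space, higher in cluttered
    environments such as tunnels. Both defaults are illustrative assumptions.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Example: a -75 dBm reading from a beacon calibrated at -59 dBm at 1 m
print(round(distance_from_rssi(-75.0), 1), "m")  # ~6.3 m
```

With several such estimates from beacons whose positions are known, an app can place the device between them and keep navigation working where GPS cannot reach.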
United States

US Tech Innovation Dreams Soured By Changed R&D Tax Laws (theregister.com) 35

Brandon Vigliarolo reports via The Register: A US federal tax change that took effect in 2022 thanks to a time-triggered portion of the Trump-era Tax Cuts and Jobs Act may leave entrepreneurs with massive tax bills. Section 174 of the US tax code -- prior to the passage of the 2017 TCJA -- allowed companies to handle the tax bill of their specified research or experimental (SRE) budgets in one of two ways: Either capitalized and amortized over the course of five years, or written off annually. Of the many things covered by SRE, most crucially for our purposes is "any amount paid or incurred in connection with the development of any software," which includes developer salaries.

The TCJA included a post-dated change to Section 174, taking effect on January 1, 2022, that no longer allows companies to automatically expense SRE costs on an annual basis. Going forward, they'd all have to be amortized over five years -- a potential budgetary disaster for companies that haven't been doing so in the past. As pointed out by Gergely Orosz of The Pragmatic Engineer, a theoretical company with $1m in revenue and $1m of software developer salary costs could have claimed it had no taxable profit in 2021. The required first-year SRE amortization rate of 10 percent would mean the org had $900k in taxable profit in 2022 -- and a six-figure tax bill coming due the following year. This isn't theoretical -- Orosz said that he recently spoke to several engineers and entrepreneurs who've been surprised with massive tax bills that have led to layoffs, reduced hiring, and left some companies in financial distress.
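Orosz's example works out as follows: under the old rules the full $1m of developer salary is deductible in year one, while under amended Section 174 only 10 percent is deductible in the first year of the five-year schedule. A quick Python sketch of that arithmetic -- a simplification that ignores everything else in the tax code:

```python
def taxable_profit(revenue: float, sre_costs: float,
                   first_year_amortization_rate: float = 0.10) -> dict:
    """Compare taxable profit when SRE costs are expensed vs. amortized.

    The 10% first-year rate is the figure cited in the article for the
    five-year schedule; real tax treatment involves details this ignores.
    """
    expensed = revenue - sre_costs                                   # pre-2022 option
    amortized = revenue - sre_costs * first_year_amortization_rate   # post-2022 rule
    return {"expensed": expensed, "amortized": amortized}

print(taxable_profit(1_000_000, 1_000_000))
# {'expensed': 0, 'amortized': 900000.0} -> paper profit, and a six-figure tax bill
```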

House of Representatives member Ron Estes (R-KS), who last year sponsored a bill to restore Section 174 to its pre-TCJA option to expense or amortize, likewise said in a late-2023 op-ed that the changes have led to R&D at US companies -- not just in the tech sector -- shrinking considerably. "Since amortization took effect, the growth rate of R&D spending has slowed dramatically from 6.6 percent on average over the previous five years to less than one-half of 1 percent over the last 12 months," Estes said. "The [R&D] sector is down by more than 14,000 jobs." [...] The Section 174 changes, he argued, also make the US far less enticing as a place to open a business or do R&D -- and the only country in the world with such forced amortization.
Not much is being done to fix the TCJA problem with Section 174. Neither the Estes bill nor a related bill introduced in the Senate in March 2023 has received a committee hearing since introduction. The White House hasn't mentioned anything about Section 174.

Meanwhile, the IRS released a notice (PDF) reminding taxpayers about Section 174's changes.
The Courts

eBay To Pay $3 Million Penalty For Employees Sending Live Cockroaches, Fetal Pig To Bloggers (cbsnews.com) 43

E-commerce giant eBay agreed to pay a $3 million penalty for the harassment and stalking of a Massachusetts couple by several of its employees. "The couple, Ina and David Steiner, had been subjected to threats and bizarre deliveries, including live spiders, cockroaches, a funeral wreath and a bloody pig mask in August 2019," reports CBS News. From the report: Thursday's fine comes after several eBay employees ran a harassment and intimidation campaign against the Steiners, who publish a news website focusing on players in the e-commerce industry. "eBay engaged in absolutely horrific, criminal conduct. The company's employees and contractors involved in this campaign put the victims through pure hell, in a petrifying campaign aimed at silencing their reporting and protecting the eBay brand," said acting U.S. Attorney for Massachusetts Joshua Levy. "We left no stone unturned in our mission to hold accountable every individual who turned the victims' world upside-down through a never-ending nightmare of menacing and criminal acts."

The Justice Department criminally charged eBay with two counts of stalking through interstate travel, two counts of stalking through electronic communications services, one count of witness tampering and one count of obstruction of justice. The company agreed to pay $3 million as part of a deferred prosecution agreement. Under the agreement, eBay will be required to retain an independent corporate compliance monitor for three years, officials said, to "ensure that eBay's senior leadership sets a tone that makes compliance with the law paramount, implements safeguards to prevent future criminal activity, and makes clear to every eBay employee that the idea of terrorizing innocent people and obstructing investigations will not be tolerated," Levy said.

Former U.S. Attorney Andrew Lelling said the plan to target the Steiners, which he described as a "campaign of terror," was hatched in April 2019 at eBay. Devin Wenig, eBay's CEO at the time, shared a link to a post Ina Steiner had written about his annual pay. The company's chief communications officer, Steve Wymer, responded: "We are going to crush this lady." About a month later, Wenig texted: "Take her down." Prosecutors said Wymer later texted eBay security director Jim Baugh. "I want to see ashes. As long as it takes. Whatever it takes," Wymer wrote. Investigators said Baugh set up a meeting with security staff and dispatched a team to Boston, about 20 miles from where the Steiners live. "Senior executives at eBay were frustrated with the newsletter's tone and content, and with the comments posted beneath the newsletter's articles," the Department of Justice wrote in its Thursday announcement.
Two former eBay security executives were sentenced to prison over the incident.
Medicine

New 'MindEar' App Can Reduce Debilitating Impact of Tinnitus, Say Researchers 50

Researchers have designed an app to reduce the impact of tinnitus, an often debilitating condition that manifests via a ringing sound or perpetual buzzing. The Guardian reports: While there is no cure, there are a number of ways of managing the condition, including cognitive behavioural therapy (CBT). This helps people to reduce their emotional connection to the sound, allowing the brain to learn to tune it out. However, CBT can be expensive and difficult for people to access. Researchers have created an app, called MindEar, that provides CBT through a chatbot, alongside other approaches such as sound therapy. "What we want to do is empower people to regain control," said Dr Fabrice Bardy, the first author of the study from the University of Auckland -- who has tinnitus.

Writing in the journal Frontiers in Audiology and Otology, Bardy and colleagues report how 28 people completed the study, 14 of whom were asked to use the app's virtual coach for 10 minutes a day for eight weeks. The other 14 participants were given similar instructions with four half-hour video calls with a clinical psychologist. The participants completed online questionnaires before the study and after the eight-week period. The results reveal that six participants given the app alone, and nine who were also given video calls, showed a clinically significant decrease in the distress caused by tinnitus, with the extent of the benefit similar for both groups. After a further eight weeks, a total of nine participants in both groups reported such improvements.
China

AirDrop 'Cracked' By Chinese Authorities To Identify Senders (macrumors.com) 25

According to Bloomberg, Apple's AirDrop feature has been cracked by a Chinese state-backed institution to identify senders who share "undesirable content". MacRumors reports: AirDrop is Apple's ad-hoc service that lets users discover nearby Macs and iOS devices and securely transfer files between them over Wi-Fi and Bluetooth. Users can send and receive photos, videos, documents, contacts, passwords and anything else that can be transferred from a Share Sheet. Apple advertises the protocol as secure because the wireless connection uses Transport Layer Security (TLS) encryption, but the Beijing Municipal Bureau of Justice (BMBJ) says it has devised a way to bypass the protocol's encryption and reveal identifying information.

According to the BMBJ's website, iPhone device logs were analyzed to create a "rainbow table" which allowed investigators to convert hidden hash values into the original text and correlate the phone numbers and email accounts of AirDrop content senders. The "technological breakthrough" has successfully helped the public security authorities identify a number of criminal suspects, who use the AirDrop function to spread illegal content, the BMBJ added. "It improves the efficiency and accuracy of case-solving and prevents the spread of inappropriate remarks as well as potential bad influences," the bureau added.
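The BMBJ hasn't published technical details, but the general approach it describes -- precomputing hashes over the comparatively small space of phone numbers and matching them against hashed identifiers recovered from device logs -- is easy to sketch. The Python illustration below uses an assumed hash scheme and a made-up number range; it is not Apple's actual AirDrop format:

```python
import hashlib
from itertools import product

def identifier_hash(identifier: str, hex_chars: int = 10) -> str:
    """Truncated SHA-256 of a contact identifier (illustrative, not Apple's exact scheme)."""
    return hashlib.sha256(identifier.encode()).hexdigest()[:hex_chars]

def build_lookup_table(prefix: str, free_digits: int) -> dict:
    """Precompute a hash -> phone number table for every number under a prefix."""
    table = {}
    for combo in product("0123456789", repeat=free_digits):
        number = prefix + "".join(combo)
        table[identifier_hash(number)] = number
    return table

# Hypothetical example: all numbers of the form +1555555XXXX (10,000 candidates)
table = build_lookup_table("+1555555", 4)
observed = identifier_hash("+15555551234")   # a hash recovered from a device log
print(table.get(observed))                   # -> '+15555551234'
```

Because the space of valid phone numbers is tiny by cryptographic standards, any unsalted hash of one can be reversed this way given enough precomputation.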

It is not known if the security flaw in the AirDrop protocol has been exploited by a government agency before now, but it is not the first time a flaw has been discovered. In April 2021, German researchers found that the mutual authentication mechanism that confirms both the receiver and sender are on each other's address book could be used to expose private information. According to the researchers, Apple was informed of the flaw in May of 2019, but did not fix it.

Science

Scientists Discover 100 To 1000 Times More Plastics In Bottled Water (washingtonpost.com) 204

An anonymous reader quotes a report from the Washington Post: People are swallowing hundreds of thousands of microscopic pieces of plastic each time they drink a liter of bottled water, scientists have shown -- a revelation that could have profound implications for human health. A new paper released Monday in the Proceedings of the National Academy of Sciences found about 240,000 particles in the average liter of bottled water, most of which were "nanoplastics" -- particles measuring less than one micrometer (less than one-seventieth the width of a human hair). [...]

The typical methods for finding microplastics can't be easily applied to finding even smaller particles, but study co-author Wei Min of Columbia University co-invented a method that involves aiming two lasers at a sample and observing the resonance of different molecules. Using machine learning, the group was able to identify seven types of plastic molecules in a sample of three types of bottled water. [...] The new study found pieces of PET (polyethylene terephthalate), which is what most plastic water bottles are made of, and polyamide, a type of plastic that is present in water filters. The researchers hypothesized that this means plastic is getting into the water both from the bottle and from the filtration process.
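The excerpt's final step -- mapping each particle's spectral fingerprint to a polymer type with a machine-learning model -- can be sketched on synthetic data. Everything below is illustrative (fake spectra and an assumed random-forest classifier); only the count of seven polymer classes comes from the study:

```python
# Toy sketch: classify particles into polymer types from spectral fingerprints.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
POLYMERS = ["PET", "PE", "PS", "PP", "PVC", "PMMA", "polyamide"]  # 7 classes, as in the study

def synth_spectrum(class_idx: int, n_channels: int = 100) -> np.ndarray:
    """Fake spectrum: noise plus one distinguishing peak per polymer class."""
    spectrum = rng.normal(0.0, 0.05, n_channels)
    spectrum[10 * class_idx + 5] += 1.0
    return spectrum

X = np.array([synth_spectrum(i % 7) for i in range(1400)])
y = np.array([POLYMERS[i % 7] for i in range(1400)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy on synthetic spectra: {clf.score(X_test, y_test):.2f}")
```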

Researchers don't yet know how dangerous tiny plastics are for human health. In a large review published in 2019, the World Health Organization said there wasn't enough firm evidence linking microplastics in water to human health, but described an urgent need for further research. In theory, nanoplastics are small enough to make it into a person's blood, liver and brain. And nanoplastics are likely to appear in much larger quantities than microplastics -- in the new research, 90 percent of the plastic particles found in the sample were nanoplastics, and only 10 percent were larger microplastics. Finding a connection between microplastics and health problems in humans is complicated -- there are thousands of types of plastics, and over 10,000 chemicals used to manufacture them. But at a certain point, [...] policymakers and the public need to prepare for the possibility that the tiny plastics in the air we breathe, the water we drink and the clothes we wear have serious and dangerous effects.
"You still have a lot of people that, because of marketing, are convinced that bottled water is better," said Sherri Mason, a professor and director of sustainability at Penn State Behrend in Erie. "But this is what you're drinking in addition to that H2O."

The Internet

How AI-Generated Content Could Fuel a Migration From Social Media to Independent 'Authored' Content (niemanlab.org) 68

The chief content officer for New York's public radio station WNYC predicts an "AI-fueled shift to niche community and authored excellence."

And ironically, it will be fueled by "Greedy publishers and malicious propagandists... flooding the web with fake or just mediocre AI-generated 'content'" which will "spotlight and boost the value of authored creativity." And it may help give birth to a new generation of independent media. Robots will make the internet more human.

First, it will speed up our migration off of big social platforms to niche communities where we can be better versions of ourselves. We're already exhausted by feeds that amplify our anxiety and algorithms that incentivize cruelty. AI will take the arms race of digital publishing shaped by algorithmic curation to its natural conclusion: big feed-based social platforms will become unending streams of noise. When we've left those sites for good, we'll miss the (mostly inaccurate) sense that we were seeing or participating in a grand, democratic town hall. But as we find places to convene where good faith participation is expected, abuse and harassment aren't, and quality is valued over quantity, we'll be happy to have traded a perception of scale influence for the experience of real connection.

Second, this flood of authorless "content" will help truly authored creativity shine in contrast... "Could a robot have done this?" will be a question we ask to push ourselves to be funnier, weirder, more vulnerable, and more creative. And for the funniest, the weirdest, the most vulnerable, and most creative: the gap between what they do and everything else will be huge. Finally, these AI-accelerated shifts will combine with the current moment in media economics to fuel a new era of independent media.

For a few years he's seen the rise of independent community-funded journalists, and "the list of thriving small enterprises is getting longer." He sees more growth in community-funding platforms (with subscription/membership features like on Substack and Patreon) which "continue to tilt the risk/reward math for audience-facing talent...."

"And the amount of audience-facing, world-class talent that left institutional media in 2023 (by choice or otherwise) is unlike anything I've seen in more than 15 years in journalism... [I]f we're lucky, we'll see the creation of a new generation of independent media businesses whose work is as funny, weird, vulnerable and creative as its creators want it to be. And those businesses will be built on truly stable ground: a direct financial relationship with people who care.

"Thank the robots."
Microsoft

Microsoft Pulls the Plug on WordPad (theregister.com) 58

Microsoft has begun ditching WordPad from Windows and removed the editor from the first Canary Channel build of 2024. From a report: We knew it was coming, but the reality has arrived in the Canary Channel. A clean install will omit WordPad as of build 26020 of Windows 11. At an undisclosed point, the application will be removed on upgrade.

The People app is also being axed, as expected, and the Steps Recorder won't be getting any more updates and will instead show a banner encouraging users to try something else. Perhaps ClipChamp? WordPad was always an odd tool. Certainly not something one would want to edit text with, but not much of a word processor either. It feels like a throwback to a previous era. However, it was also free, came with Windows, and didn't insist on having a connection to the internet for it to work.

AI

AI-Assisted Bug Reports Are Seriously Annoying For Developers (theregister.com) 29

Generative AI models like Google Bard and GitHub Copilot are increasingly being used in various industries, but users often overlook their limitations, leading to serious errors and inefficiencies. Daniel Stenberg of curl and libcurl highlights a specific problem of AI-generated security reports: when reports are made to look better and to appear to have a point, it takes a longer time to research and eventually discard it. "Every security report has to have a human spend time to look at it and assess what it means," adds Stenberg. "The better the crap, the longer time and the more energy we have to spend on the report until we close it." The Register reports: The curl project offers a bug bounty to security researchers who find and report legitimate vulnerabilities. According to Stenberg, the program has paid out over $70,000 in rewards to date. Of 415 vulnerability reports received, 64 have been confirmed as security flaws and 77 have been deemed informative -- bugs without obvious security implications. So about 66 percent of the reports have been invalid. The issue for Stenberg is that these reports still need to be investigated and that takes developer time. And while those submitting bug reports have begun using AI tools to accelerate the process of finding supposed bugs and writing up reports, those reviewing bug reports still rely on human review. The result of this asymmetry is more plausible-sounding reports, because chatbot models can produce detailed, readable text without regard to accuracy.
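The 66 percent figure follows directly from the counts Stenberg gives; a quick check of the arithmetic, with the numbers taken from the article:

```python
total_reports = 415
confirmed_vulnerabilities = 64
informative_but_not_security = 77

invalid = total_reports - confirmed_vulnerabilities - informative_but_not_security
print(f"{invalid} of {total_reports} reports invalid ({invalid / total_reports:.0%})")
# -> 274 of 415 reports invalid (66%)
```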

As Stenberg puts it, AI produces better crap. "A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is considered one of the most important areas so it tends to trump almost everything else." As examples, he cites two reports submitted to HackerOne, a vulnerability reporting community. One claimed to describe Curl CVE-2023-38545 prior to actual disclosure. But Stenberg had to post to the forum to make clear that the bug report was bogus. He said that the report, produced with the help of Google Bard, "reeks of typical AI style hallucinations: it mixes and matches facts and details from old security issues, creating and making up something new that has no connection with reality." [...]

Stenberg readily acknowledges that AI assistance can be genuinely helpful. But he argues that having a human in the loop makes the use and outcome of AI tools much better. Even so, he expects the ease and utility of these tools, coupled with the financial incentive of bug bounties, will lead to more shoddy LLM-generated security reports, to the detriment of those on the receiving end.

AI

ChatGPT Bombs Test On Diagnosing Kids' Medical Cases With 83% Error Rate (arstechnica.com) 70

An anonymous reader quotes a report from Ars Technica: ChatGPT is still no House, MD. While the chatty AI bot has previously underwhelmed with its attempts to diagnose challenging medical cases -- with an accuracy rate of 39 percent in an analysis last year -- a study out this week in JAMA Pediatrics suggests the fourth version of the large language model is especially bad with kids. It had an accuracy rate of just 17 percent when diagnosing pediatric medical cases. The low success rate suggests human pediatricians won't be out of jobs any time soon, in case that was a concern. As the authors put it: "[T]his study underscores the invaluable role that clinical experience holds." But it also identifies the critical weaknesses that led to ChatGPT's high error rate and ways to transform it into a useful tool in clinical care. With so much interest and experimentation with AI chatbots, many pediatricians and other doctors see their integration into clinical care as inevitable. [...]

For ChatGPT's test, the researchers pasted the relevant text of the medical cases into the prompt, and then two qualified physician-researchers scored the AI-generated answers as correct, incorrect, or "did not fully capture the diagnosis." In the latter case, ChatGPT came up with a clinically related condition that was too broad or unspecific to be considered the correct diagnosis. For instance, ChatGPT diagnosed one child's case as caused by a branchial cleft cyst -- a lump in the neck or below the collarbone -- when the correct diagnosis was Branchio-oto-renal syndrome, a genetic condition that causes the abnormal development of tissue in the neck, and malformations in the ears and kidneys. One of the signs of the condition is the formation of branchial cleft cysts. Overall, ChatGPT got the right answer in just 17 of the 100 cases. It was plainly wrong in 72 cases, and did not fully capture the diagnosis of the remaining 11 cases. Among the 83 wrong diagnoses, 47 (57 percent) were in the same organ system.
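The headline figures can be reproduced from the counts in the excerpt; a short tally, with the numbers taken from the article:

```python
outcomes = {"correct": 17, "incorrect": 72, "did not fully capture": 11}
total = sum(outcomes.values())          # 100 cases
wrong = total - outcomes["correct"]     # 83 -> the "83% error rate"
same_organ_system = 47                  # among the wrong diagnoses

print(f"error rate: {wrong / total:.0%}")                                   # 83%
print(f"same organ system among errors: {same_organ_system / wrong:.0%}")   # 57%
```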

Among the failures, researchers noted that ChatGPT appeared to struggle with spotting known relationships between conditions that an experienced physician would hopefully pick up on. For example, it didn't make the connection between autism and scurvy (Vitamin C deficiency) in one medical case. Neuropsychiatric conditions, such as autism, can lead to restricted diets, and that in turn can lead to vitamin deficiencies. As such, neuropsychiatric conditions are notable risk factors for the development of vitamin deficiencies in kids living in high-income countries, and clinicians should be on the lookout for them. ChatGPT, meanwhile, came up with the diagnosis of a rare autoimmune condition. Though the chatbot struggled in this test, the researchers suggest it could improve by being specifically and selectively trained on accurate and trustworthy medical literature -- not stuff on the Internet, which can include inaccurate information and misinformation. They also suggest chatbots could improve with more real-time access to medical data, allowing the models to refine their accuracy, described as "tuning."

Submission + - DVD resurgence to prevent films from disappearing (bbc.com)

smooth wombat writes: The advent of streaming services heralded a new era of movie watching. No longer tied to an inconvenient showtime at a theater, viewers could watch movies at their convenience any time of the day or night in their own homes. However, with that convenience comes a sinister side: those same movies disappearing from streaming services. Once a movie is removed from a streaming service, you can't watch it again. As a result, more people, particularly younger people, are buying DVDs, and even records, to preserve their ability to watch and listen to what they want when they want. Ahead of Oppenheimer's home video release, Christopher Nolan encouraged fans to embrace "a version you can buy and own at home and put on a shelf so no evil streaming service can come steal it from you". From the BBC article:

Other directors have chimed in to sing the praises of physical media. James Cameron told Variety: "The streamers are denying us any access whatsoever to certain films. And I think people are responding with their natural reaction, which is 'I'm going to buy it, and I'm going to watch it any time I want.'"

Guillermo del Toro posted on X that "If you own a great 4K HD, Blu-ray, DVD etc etc of a film or films you love... you are the custodian of those films for generations to come." His tweet prompted people to reply, sharing evidence of their vast DVD collections.

There will always be fans who want to own everything they can by a favourite artist or director, but another factor is an increasing fear over how much – or rather, how little – control we have over the content we stream. With so many streaming services at our fingertips, it's easy to assume that we can watch any film we want, any time we want, subscription depending. But there are many films that don't seem to exist online. In the UK, you won't find David Lynch's seminal debut Eraserhead available to stream. In the US, one New York Times writer recently told of her difficulty in trying to watch her favourite childhood movie, Britney Spears' Crossroads. Nineties pop fans wanting to indulge in a spot of nostalgia with Spice World will struggle to find it in the US.

Even films that are available could disappear at any moment, as streaming services reevaluate their content libraries or remove titles due to licensing agreements. And when you pay to purchase a digital version of a film or TV show, as opposed to renting it or watching it via a streaming subscription, you still don't "own" it – you've just purchased a licence to watch it. And, of course, when everything is on the cloud, we are at the mercy of a stable internet connection.

It was a problem that the film collector Lucas Henkel kept encountering. "I realised that many of the movies I enjoy are not really available on streaming services, or they disappear frequently, so the only way to see them reliably is through physical media," he tells BBC Culture. So Henkel decided to set up his own boutique home entertainment distribution label, Celluloid Dreams. "As a collector myself, it has a lot to do with the desire to own something tangible," says Henkel, explaining his own commitment to physical media. "More importantly, it guarantees access. I can pull out a 20-year-old DVD and play it any day I want. No restrictions, no extra fees, no subscriptions -- just insert the disc and press play. Seriously, what's not to like about that? And no streaming service can match the quality of a presentation coming from a physical medium."

The Internet

Is the Internet About to Get Weird Again? (rollingstone.com) 83

Long-time tech entrepreneur Anil Dash predicts a big shift in the digital landscape in 2024. And "regular internet users — not just the world's tech tycoons — may be the ones who decide how it goes." The first thing to understand about this new era of the internet is that power is, undoubtedly, shifting. For example, regulators are now part of the story — an ironic shift for anyone who was around in the dot com days. In the E.U., tech giants like Apple are being forced to hold their noses and embrace mandated changes like opening up their devices to allow alternate app stores to provide apps to consumers. This could be good news, increasing consumer choice and possibly enabling different business models — how about mobile games that aren't constantly pestering gamers for in-app purchases? Back in the U.S., a shocking judgment in Epic Games' (that's the Fortnite folks') lawsuit against Google leaves us with the promise that Android phones might open up in a similar way.

That's not just good news for the billions of people who own smartphones. It's part of a sea change for the coders and designers who build the apps, sites, and games we all use. For an entire generation, the imagination of people making the web has been hemmed in by the control of a handful of giant companies that have had enormous control over things like search results, or app stores, or ad platforms, or payment systems. Going back to the more free-for-all nature of the Nineties internet could mean we see a proliferation of unexpected, strange new products and services. Back then, a lot of technology was created by local communities or people with a shared interest, and it was as likely that cool things would be invented by universities and non-profits and eccentric lone creators as they were to be made by giant corporations....

In that era, people could even make their own little social networks, so the conversations and content you found on an online forum or discussion were as likely to have been hosted by the efforts of one lone creator as to have come from some giant corporate conglomerate. It was a more democratized internet, and while the world can't return to that level of simplicity, we're seeing signs of a modern revisiting of some of those ideas.

Dash's article (published in Rolling Stone) ends with examples of "people who had been quietly keeping the spirit of the human, personal, creative internet alive... seeing a resurgence now that the web is up for grabs again."
  • The School for Poetic Computation (which Dash describes as "an eccentric, deeply charming, self-organized school for people who want to combine art and technology and a social conscience.")
  • Mask On Zone, "a collaboration with the artist and coder Ritu Ghiya, which gives demonstrators and protesters in-context guidance on how to avoid surveillance."

Dash concludes that "We're seeing the biggest return to that human-run, personal-scale web that we've witnessed since the turn of the millennium, with enough momentum that it's likely that 2024 is the first year since then that many people have the experience of making a new connection or seeing something go viral on a platform that's being run by a regular person instead of a commercial entity.

"It's going to make a lot of new things possible..."

A big thank-you to long-time Slashdot reader DrunkenTerror for submitting the article.


China

That Chinese Spy Balloon Used an American ISP to Communicate, Say US Officials (nbcnews.com) 74

NBC News reports that the Chinese spy balloon that flew across the U.S. in February "used an American internet service provider to communicate, according to two current and one former U.S. official familiar with the assessment."

The balloon used the American ISP connection "to send and receive communications from China, primarily related to its navigation." Officials familiar with the assessment said it found that the connection allowed the balloon to send burst transmissions, or high-bandwidth collections of data over short periods of time.

The Biden administration sought a highly secretive court order from the federal Foreign Intelligence Surveillance Court to collect intelligence about it while it was over the U.S., according to multiple current and former U.S. officials. How the court ruled has not been disclosed. Such a court order would have allowed U.S. intelligence agencies to conduct electronic surveillance on the balloon as it flew over the U.S. and as it sent and received messages to and from China, the officials said, including communications sent via the American internet service provider...

The previously unreported U.S. effort to monitor the balloon's communications could be one reason Biden administration officials have insisted that they got more intelligence out of the device than it got as it flew over the U.S. Senior administration officials have said the U.S. was able to protect sensitive sites on the ground because they closely tracked the balloon's projected flight path. The U.S. military moved or obscured sensitive equipment so the balloon could not collect images or video while it was overhead.

NBC News is not naming the internet service provider, but says it denied that the Chinese balloon had used its network, "a determination it said was based on its own investigation and discussions it had with U.S. officials." The balloon contained "multiple antennas, including an array most likely able to collect and geolocate communications," according to reports from a U.S. State Department official cited by NBC News in February. It was also powered by enormous solar panels that generated enough power to operate intelligence collection sensors, the official said.

Reached for comment this week, a spokesperson for the Chinese Embassy in Washington told NBC News that the balloon was just a weather balloon that had accidentally drifted into American airspace.
Android

Beeper's iMessage Connection Software Open Sourced. What Happens Next? (cnet.com) 85

"The iMessage connection software that powers Beeper Mini and Beeper Cloud is now 100% open source," Beeper announced late this week. " Anyone who wants can use it or continue development."

But while Beeper says it's done trying to bring iMessage to Android, CNET reports that the whole battle was "deeply tied" to Apple's ongoing strategy to control the mobile market: The tide seems to be changing, however: Apple said last month it would be opening up its Messages app (likely due to European regulation) to work with the newer, more feature-rich texting protocol called RCS. This hopefully will lead to a more modern and secure messaging experience when texting between an iPhone and an Android phone, and lead away from the aging SMS and MMS standards. Unfortunately, green bubbles will continue to persist even if there might be little to no functional difference. While third-party apps like Nothing Chats attempted and ultimately failed to bring iMessage to Android, Apple will likely never release the app on Google's mobile operating system.

Until RCS is fully adopted, companies are creating services to allow access to iMessage via Android phones. Apple, for its part, has been quick to block apps like Beeper Mini, citing security concerns. This, however, is raising eyebrows from lawmakers regarding competition in the messaging space and Apple's tight control over the market...

Beeper in a December 21 blog post told users to grab a jailbroken iPhone and install a free Beeper tool that'll generate iMessage registration codes to keep the service operational. It's such a roundabout and potentially expensive way of trying to get iMessage on Android that it likely won't be worth it for most people. For those not willing to go out and jailbreak an iPhone, Beeper said in a now-deleted blog post that it would allow people to rent a jailbroken unit for a small monthly fee starting next year.

Education

Microsoft President Brad Smith Quietly Leaves Board of Nonprofit Code.org 4

Longtime Slashdot reader theodp writes: Way back in September 2012, Microsoft President Brad Smith discussed the idea of "producing a crisis" to advance Microsoft's "two-pronged" National Talent Strategy to increase K-12 CS education and the number of H-1B visas. Not long thereafter, the tech-backed nonprofit Code.org (which promotes and provides K-12 CS education and is led by Smith's next-door neighbor) and Mark Zuckerberg's FWD.us PAC (which lobbied for H-1B reform) were born, with Smith on board both. Over the past 10+ years, Smith has played a key role in establishing Code.org's influence in the new K-12 CS education "grassroots" movement, including getting buy-in from three Presidential administrations -- Obama, Trump, and Biden -- as well as the U.S. Dept. of Education and the nation's Governors.

But after recent updates, Code.org's Leadership page now indicates that Smith has quietly left Code.org's Board of Directors and thanks him for his past help and advice. Since November (when archive.org indicates Smith's photo was yanked from Code.org's Leadership page), Smith has been in the news in conjunction with Microsoft's relationship with another Microsoft-bankrolled nonprofit, OpenAI, which has come under scrutiny by the Feds and in the UK. Smith, who noted he and Microsoft helped OpenAI and CEO Sam Altman craft messaging ahead of a White House meeting, announced in a Dec. 8th tweet that Microsoft will be getting a non-voting OpenAI Board seat in connection with Altman's return to power (who that non-voting Microsoft OpenAI board member will be has not been announced).

OpenAI, Microsoft, and Code.org teamed up in December to provide K-12 CS+AI tutorials for this December's AI-themed Hour of Code (the trio has also partnered with Amazon and Google on the Code.org-led TeachAI initiative). And while Smith has left Code.org's Board, Microsoft's influence there will live on as Microsoft CTO Kevin Scott -- credited for forging Microsoft's OpenAI partnership -- remains a Code.org Board member together with execs from other Code.org Platinum Supporters ($3+ million in past 2 years) Google and Amazon.
