180946068
submission
BrianFagioli writes:
Beelink has announced a new series of mini PCs that come with the OpenClaw AI environment preinstalled, aiming to make running local AI models less painful. Instead of requiring users to install Linux, configure drivers, and assemble an inference stack themselves, the systems ship with everything already set up. The machines are offered in a bright "Lobster Red" chassis and include models designed for either local large language model inference or cloud-based AI services.
Some configurations focus on running models locally, with Beelink claiming its GTR9 Pro, powered by the AMD Ryzen AI Max+ 395, can deliver roughly 52 tokens per second on the GPT OSS 120B model. Other systems are aimed at developers who prefer cloud access to models such as GPT-4o, Claude, and Gemini. The company is also offering SSD upgrade kits preloaded with Ubuntu and OpenClaw so existing Beelink systems can gain AI functionality without replacing the hardware.
180920150
submission
BrianFagioli writes:
Apple has a maddening bug that has persisted across well over a decade of iOS releases. When you long-press an email address on iPhone or iPad and choose "Copy Email Address," what ends up on your clipboard is not just the email address: it is "mailto:someone@example.com." That is a URI prefix you did not ask for, tacked onto the one thing the menu explicitly promised to copy. Paste it somewhere that does not strip it automatically and you have a problem.
The astonishing part is not the bug itself; it is that Apple has never fixed it. This is a company that has overhauled its chip architecture, shipped new programming languages, and added satellite connectivity to a phone. But "Copy Email Address" still does not copy just the email address. The fix is trivial: strip the prefix when the user picks that option. Apple knows this is broken. At some point, leaving it unfixed stops being an oversight and starts being a choice.
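The remedy described above is genuinely a few lines of string handling. Here is a minimal sketch in Python (Apple's actual menu handler would be written in Swift or Objective-C, and the function name here is hypothetical):

```python
def copy_email_address(menu_item_value: str) -> str:
    """Return only the address, stripping a leading mailto: scheme if present."""
    prefix = "mailto:"
    if menu_item_value.lower().startswith(prefix):
        return menu_item_value[len(prefix):]
    return menu_item_value

print(copy_email_address("mailto:someone@example.com"))  # someone@example.com
print(copy_email_address("someone@example.com"))         # already bare, unchanged
```

The case-insensitive check matters because URI schemes are case-insensitive, so "MAILTO:" would need stripping too.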
180914820
submission
BrianFagioli writes:
IBM and a team of international researchers have created a never-before-seen molecule with electrons traveling in a corkscrew-like pattern, demonstrating its unique properties using a quantum computer.
180901112
submission
BrianFagioli writes:
Seagate says it is now shipping its Mozaic 4+ HAMR-based hard drives at up to 44TB per drive, with production deployments already underway at two hyperscale cloud providers. The company claims the platform is the only heat-assisted magnetic recording implementation currently operating at scale, and it is targeting a path from today's 4+TB per disk toward 10TB per disk, eventually enabling 100TB-class drives. In a one-exabyte deployment, Seagate estimates Mozaic could improve infrastructure efficiency by roughly 47 percent compared to standard 30TB drives, cutting both footprint and energy consumption.
While GPUs dominate AI headlines, large-scale storage remains the economic backbone of training and archival workloads. HAMR uses a tiny laser to heat the disk surface during writes, allowing higher areal density without sacrificing stability. With most major cloud storage providers reportedly qualified on the Mozaic platform, Seagate is positioning spinning disks, not flash, as the long-term answer for cost-effective AI-scale data growth.
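As a rough back-of-the-envelope check on the drive-count side of that one-exabyte claim (my own arithmetic, not Seagate's methodology; their roughly 47 percent figure also folds in power and footprint, not just spindle count):

```python
EXABYTE_IN_TB = 1_000_000  # 1 EB expressed in TB (decimal units)

drives_at_30tb = EXABYTE_IN_TB / 30  # standard 30TB drives for 1 EB raw capacity
drives_at_44tb = EXABYTE_IN_TB / 44  # Mozaic 4+ 44TB drives for the same capacity

fewer_drives = 1 - drives_at_44tb / drives_at_30tb
print(f"{fewer_drives:.1%} fewer drives")  # roughly 31.8% fewer spindles
```

Capacity alone buys about a third fewer drives; the remaining gap to 47 percent presumably comes from the per-drive energy and rack-space savings Seagate cites.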
180858494
submission
BrianFagioli writes:
A new Coursera survey of more than 4,200 students and educators across five countries finds that AI is now deeply embedded in higher education. Eighty percent of students say AI has positively supported their learning experience, and 70 percent believe it will improve exam performance and overall quality. Yet only 20 percent of U.S. educators report that their university has a formal AI policy in place, while roughly half of respondents say the higher education system is unprepared to manage AI's impact.
Concerns about academic integrity remain strong. Sixty-five percent believe unregulated AI could undermine degree credibility, 37 percent worry about increased plagiarism, and 24 percent of students admit to submitting AI-generated work without disclosure. Only 27 percent of educators feel confident identifying AI-generated content, and just 28 percent say AI literacy has been incorporated into the curriculum, highlighting a widening gap between student adoption and institutional governance.
180855498
submission
BrianFagioli writes:
New research from Florida International University suggests that simply disclosing AI use can damage a creator's reputation, even when the creative output itself is identical. In one experiment, participants evaluated the same video game soundtrack but were given different descriptions of the composer. Some were told it was written by Hans Zimmer, while others were told it came from an unknown student. When AI collaboration was disclosed, ratings dropped across the board, regardless of whether the name attached carried prestige.
The study found that reputation offered only limited protection. Participants were slightly more willing to believe a well-known composer remained in control of the creative process, but overall perceptions of authenticity and competence still declined. Researchers say the issue is not performance quality but perception. Once AI enters the picture, audiences begin questioning whether the creativity is genuine, suggesting that, at least for now, AI carries a reputational tax.
180834430
submission
BrianFagioli writes:
The Salvation Army has launched what it calls the world's first digital thrift store inside Roblox, an experience named Thrift Score that lets players browse virtual racks and buy digital fashion for their avatars. I understand the strategy of meeting Gen Z and Gen Alpha where they already spend time and money. Still, I cannot help but feel uneasy about turning something that, in the real world, often serves low-income families in genuine need into a gamified aesthetic inside a video game, even if proceeds support rehabilitation and community programs. A thrift store is not just a quirky brand concept but a lifeline for many people, and packaging that reality as entertainment creates a strange disconnect that is hard to ignore.
180829064
submission
BrianFagioli writes:
Security researchers at ESET say they have uncovered what appears to be the first Android malware strain to integrate generative AI directly into its execution flow. The malware, dubbed PromptSpy, abuses Google's Gemini model to interpret on-screen UI elements in real time and generate step-by-step interaction instructions. Instead of relying on hardcoded taps or fixed coordinates, PromptSpy feeds Gemini an XML dump of the current screen and receives JSON-formatted actions in return. The goal is persistence: it uses AI-generated guidance to lock itself into the recent apps list, making it harder for users to swipe it away or kill the process.
Beyond the AI-assisted persistence trick, PromptSpy includes a built-in VNC module that gives attackers full remote access once Accessibility permissions are granted. Operators can view the screen, perform gestures, capture lock-screen credentials, record video, and take screenshots. Distribution appears to have occurred outside Google Play via a banking-themed lure targeting Spanish-speaking users, with code artifacts suggesting development in a Chinese-speaking environment. While Google Play Protect is enabled by default on certified devices, the discovery highlights a shift toward AI-assisted malware that can dynamically adapt to different Android skins and device layouts.
180827746
submission
BrianFagioli writes:
New research unveiled at the National Religious Broadcasters International Christian Media Convention suggests AI is moving beyond productivity tool status and into the spiritual lives of Americans. According to data from Barna Group and Gloo, nearly one in three U.S. adults now say spiritual advice from AI is as trustworthy as advice from a priest or pastor. Among Gen Z and Millennials, that figure rises to roughly two in five. At the same time, about 41 percent of pastors report using AI for Bible study, while only 12 percent say they feel comfortable teaching about it, creating a widening gap between congregational experimentation and clerical confidence.
The findings also show that roughly four in ten practicing Christians say AI has helped with prayer, Bible study, or spiritual growth, and one third want guidance from clergy on how to navigate the technology. Researchers frame the moment as an opportunity for faith leaders to address how AI should be integrated responsibly rather than ignored. With trust in mainstream media declining and Christian media still viewed as valuable by many Americans, the broader question emerging from the data is whether AI will remain a study aid or evolve into a de facto spiritual authority in a digitally mediated religious landscape.
180808484
submission
BrianFagioli writes:
The Vatican is marking the 400th anniversary of St. Peter's Basilica with an unexpected modern addition: artificial intelligence. During the commemorative year, which begins February 20 and concludes November 18 with a Mass celebrated by Pope Leo XIV, pilgrims will be able to access real-time, AI-assisted translations of major liturgies inside the Basilica. By scanning QR codes placed throughout the church, visitors can open a webpage offering live audio and text translations in their chosen language, powered by the Lara AI interpreting platform developed by Translated. No dedicated app or special hardware is required.
Cardinal Mauro Gambetti said the anniversary is not simply about recalling a date, but about "bringing back to the heart" what gives life and hope. Alongside the AI translation system, the Vatican is introducing a digital booking tool called Smart Pass to manage visitor flow, and a structural monitoring project dubbed "Beyond the Visible" to safeguard the Basilica's stability. The effort signals a cautious but clear embrace of modern technology inside one of Christianity's most historic spaces, raising interesting questions about how AI may continue to intersect with religious practice and global access to liturgy.
180794538
submission
BrianFagioli writes:
Microsoft recently suggested that artificial intelligence could automate most white-collar tasks within the next 12 to 18 months, a timeline that implies disruption across law, finance, marketing, and software development. If accurate, that forecast is less about incremental productivity gains and more about structural change in professional work. AI systems are already drafting contracts, writing code, analyzing data, and generating reports at scale. The trajectory is real, and the capabilities are improving quickly.
The harder question is why Microsoft continues accelerating enterprise AI tools while acknowledging the potential for widespread displacement. If leadership believes this disruption is imminent, what responsibility does the company have to explain how workers transition, who absorbs the economic shock, and why the long-term benefits justify the short-term harm? Is this augmentation, elimination, or simply competitive inevitability?
180771706
submission
BrianFagioli writes:
The Linux Mint developers say they are considering a longer development cycle, arguing that the project's current six-month cadence, plus LMDE releases, leaves too little room for deeper work. In a recent update, the team reflected on its incremental philosophy, independence from upstream decisions like Snap, and heavy investment in Cinnamon and XApp. While the release process "works very well" and delivers steady improvements, they admit it consumes significant time in testing, fixing, and shipping, potentially capping ambition.
Mint's next release will be based on a new Ubuntu LTS, and the team says it is seriously interested in stretching the development window. The stated goal is to free up resources for more substantial development rather than constant release management. Whether this signals bigger technical changes or simply acknowledges bandwidth limits for a small team remains unclear, but it marks a notable rethink of one of desktop Linux's most consistent release rhythms.
180766964
submission
BrianFagioli writes:
New research from McAfee suggests romance scams are no longer edge cases on dating apps, but a routine part of the experience. One in four Americans reports encountering a fake profile or AI-driven bot, with scammers increasingly relying on automation rather than obvious phishing links. McAfee Labs observed users receiving dozens of unsolicited messages in short periods, even without profile photos, indicating bots are casting wide nets and waiting for emotional engagement rather than targeting carefully curated victims.
While traditional dating-app-themed malicious URLs declined year over year, the data suggests scammers are simply changing tactics. QR codes, cloned mobile apps, and long-form conversational scams appear to be replacing crude link-based attacks. Losses remain highly gendered, with men far more likely to report financial harm and larger dollar losses, while emotional damage cuts across age groups. The takeaway is less about malware and more about psychology: trust is built first, money comes later, and AI is making that process faster, cheaper, and harder to detect.
180760518
submission
BrianFagioli writes:
AV-Comparatives has released its Security Survey 2026, based on responses from more than 1,300 participants across 87 countries, offering a look at how security-conscious users actually protect their systems. Windows 11 now dominates the desktop among respondents, but most users still do not rely on the operating system alone. Paid third-party security software remains the norm, with familiar vendors like Bitdefender, Kaspersky, ESET, and Microsoft's own tools topping the list. The data suggests trust and reputation continue to matter more than cost or convenience.
What stands out is Linux's growing credibility. Usage among respondents is now roughly on par with macOS, signaling that Linux is no longer just a niche choice for hobbyists in security circles. At the same time, respondents expressed growing concern about state-linked cyber threats, most frequently naming Russia and China, while also voicing unease about domestic surveillance within their own countries. Together, the results suggest cybersecurity in 2026 is as much about trust in platforms and institutions as it is about malware and exploits.
180757642
submission
BrianFagioli writes:
A reported $6 million bitcoin ransom connected to the alleged kidnapping of Savannah Guthrie’s mother has dragged cryptocurrency back into an uncomfortable spotlight. While the priority should be empathy for a family facing an unthinkable situation, the use of bitcoin once again reinforces public fears about crypto being the payment method of choice for extortion and serious crime.
Supporters argue that bitcoin is traceable and that crime long predates digital currency, but repeated headlines like this keep eroding trust. As crypto continues to appear in ransomware attacks and now alleged kidnapping cases, the debate is shifting from regulation to a blunter question: whether bitcoin and other cryptocurrencies should be banned or heavily restricted before their downsides outweigh their promised benefits.