
Submission Summary: 0 pending, 15 declined, 0 accepted (15 total, 0.00% accepted)

Submission + - De-identified public data can be linked to social media profiles (acm.org)

rezoG writes: Abstract

Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show---theoretically, via simulation, and through experiments on real user data---that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one's feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user's social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is---to our knowledge---the largest-scale demonstrated de-anonymization to date.
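The maximum-likelihood idea in the abstract can be sketched in a few lines. This toy model is not the authors' code; the visit probabilities and candidate feeds below are made-up assumptions. It scores each candidate profile by how well that profile's feed explains the observed browsing history, then picks the best-scoring one:

```python
import math

# Assumed model parameters (illustrative, not from the paper):
P_FEED = 0.1          # prob. a user visits a link that appears in their own feed
P_BACKGROUND = 0.001  # prob. a user visits any other link

def log_likelihood(history, feed):
    """Log-likelihood that `history` was produced by a user with this `feed`."""
    return sum(
        math.log(P_FEED if link in feed else P_BACKGROUND)
        for link in history
    )

def best_candidate(history, candidate_feeds):
    """Maximum-likelihood estimate: the profile whose feed best explains the history."""
    return max(candidate_feeds,
               key=lambda name: log_likelihood(history, candidate_feeds[name]))

# Hypothetical candidate profiles and the link sets in their feeds.
feeds = {
    "alice": {"a.com", "b.com", "c.com"},
    "bob":   {"x.com", "y.com", "z.com"},
}
print(best_candidate({"a.com", "c.com", "q.com"}, feeds))  # prints "alice"
```

Because distinctive links dominate the log-likelihood, even a short history concentrates probability mass on one profile, which is why 30 Twitter-originated links suffice in the paper's simulations.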

Submission + - The Silent Erosion: Global Generational Cognitive Decline in the Age of AI ... (hal.science)

rezoG writes: Research at the R. C. Patel Institute of Technology, Administration, Shirpur, India, indicates that the broad adoption and incorporation of AI into the social sphere is not without costs.

The researchers explored the extent of cognitive decline related to AI reliance across multiple nations. They tentatively conclude that widespread AI entwinement within socio-technical spheres does result in cognitive atrophy. The effects were found to be more pronounced in nations with higher AI adoption, and less pronounced in nations that take a more deliberate approach to pedagogy and to the inclusion of AI in everyday life.

This study formalizes the concept of national cognitive resilience as the capacity of a country to preserve and regenerate metacognitive friction, epistemic novelty density, and human interpretive effort despite increasing AI integration. In the AI era, the boundaries between thinking and automation, learning and consumption, and originality and replication are quietly dissolving. The silent consequence is GCA, the recursive weakening of human epistemic agency through sustained reliance on AI-mediated knowledge production. This erosion is not accidental; it arises from design logics that privilege frictionless efficiency and predictive optimization over ambiguity, novelty, and reflection.
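The composite-index idea above can be illustrated as a weighted score over the three capacities the study names. The component names, weights, and values here are purely illustrative assumptions, not the paper's actual GCAL–CDI formula:

```python
# Hypothetical sketch of a composite "cognitive decline index" in the spirit
# of the paper's CDI: a weighted average of the three named capacities.
WEIGHTS = {
    "metacognitive_friction": 0.4,
    "epistemic_novelty_density": 0.3,
    "human_interpretive_effort": 0.3,
}

def cdi(scores):
    """Composite index in [0, 1] from per-component scores in [0, 1]."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(cdi({
    "metacognitive_friction": 0.8,
    "epistemic_novelty_density": 0.6,
    "human_interpretive_effort": 0.7,
}))  # 0.4*0.8 + 0.3*0.6 + 0.3*0.7, i.e. approximately 0.71
```

A weighted composite of this shape is what would let the index reward "cognitive scaffolding" independently of raw AI maturity, matching the study's claim that AI readiness and cognitive resilience can diverge.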

The results show clearly that AI readiness does not equate to cognitive resilience. Nations with advanced AI infrastructure, such as China and the United States, may record low CDI scores when novelty is eroded and automation reliance is high. Conversely, countries like Singapore (and, outside this dataset, Finland) sustain high CDI scores by embedding cognitive scaffolding into their educational and governance systems, even with similar AI maturity levels. Historical precedents such as the Gutenberg press, industrial schooling, and the screen revolution demonstrate that cognitive systems can recover from compression and conformity, but only through deliberate pedagogical reform and institutional innovation.

The broader implication is that cognition must be treated as a civilizational asset, equal in strategic importance to ecological sustainability or democratic stability. Just as environmental degradation prompted coordinated global climate action, cognitive degradation demands epistemic governance that is transdisciplinary, generational, and anticipatory. Embedding GCAL–CDI metrics into AI policy, designing reflective human–AI interfaces, and prioritizing epistemic plurality over algorithmic conformity are essential to achieving this. Artificial intelligence need not be inherently corrosive.


Submission + - The Ironies of Artificial Intelligence (tandfonline.com)

rezoG writes: "After 40 years, Bainbridge's keen observations continue to hold true as the use of automation has increased across many domains, including aviation, air traffic control, automated process control, drilling, and transportation systems."

Over 40 years ago, Lisanne Bainbridge pointed out some paradoxical results that AI adoption could bring, with automation (more generally) conceivably resulting in more work for humans.

Lisanne Bainbridge's 1983 paper, the Ironies of Automation (Bainbridge 1983), was a telling and prescient summary of the many challenges that arise from automation. She pointed out the ways in which automation, paradoxically, makes the human's job more crucial and more difficult, rather than easier and less essential as so many engineers believe. Not only does automation introduce new design errors into the control of systems, but it creates very different jobs that have many new problems, with the result that people may be less able to perform when needed. They need to be more skilled to understand and operate the automation, while simultaneously the automation leads to skill atrophy. Additional system complexity is introduced as well as vigilance problems that interfere with people's ability to oversee the automation. And while manual workload may be decreased much of the time, cognitive workload is often increased at critical times.

https://www.tandfonline.com/do...

Submission + - Safe Ultrasound-based Neuromodulation - via Headset (medrxiv.org)

rezoG writes: University of Arizona scientists were able to safely deliver low-intensity focused ultrasound (LIFU) to human brains — through the skull — via a head-worn device. The device is attached to the head with straps, and an array of transducers emits ultrasound that can be directed to specific areas of the brain to modulate their activity. The resulting effects can then be leveraged for clinical purposes.

Transcranial Low-Intensity Focused Ultrasound (LIFU) offers unique opportunities for precisely neuromodulating small and/or deep targets within the human brain, which may be useful for treating psychiatric and neurological disorders. This paper presents a novel ultrasound system that delivers focused ultrasound through the forehead to anterior brain targets and evaluates its safety and usability in a volunteer study. ... The presented system was successfully used to safely deliver LIFU through the forehead to the amPFC in all volunteers, and was well-tolerated. With the capabilities validated here and positive results of the study, this technology appears well-suited to explore LIFU’s efficacy in clinical neuromodulation contexts.



Submission + - Radar Used to Decode Silent Speech

rezoG writes: Korean scientists were able to decode 'silent speech' using radar placed near a "speaker's" head and machine-learning techniques — discerning silent utterances at the level of the phoneme (the smallest unit of spoken language). The system does not require physical contact with the speaker, and "[the] study achieved average classification accuracies of 86.47%, 81.59%, 88.95%, and 96.88% for the vowels, consonants, words, and phrases, respectively."

Abstract: Several sensing techniques have been proposed for silent speech recognition (SSR); however, many of these methods require invasive processes or sensor attachment to the skin using adhesive tape or glue, rendering them unsuitable for frequent use in daily life. By contrast, impulse radio ultra-wideband (IR-UWB) radar can operate without physical contact with users’ articulators and related body parts, offering several advantages for SSR. These advantages include high range resolution, high penetrability, low power consumption, robustness to external light or sound interference, and the ability to be embedded in space-constrained handheld devices. This study demonstrated IR-UWB radar-based contactless SSR using four types of speech stimuli (vowels, consonants, words, and phrases). To achieve this, a novel speech feature extraction algorithm specifically designed for IR-UWB radar-based SSR is proposed. Each speech stimulus is recognized by applying a classification algorithm to the extracted speech features. Two different algorithms, multidimensional dynamic time warping (MD-DTW) and deep neural network—hidden Markov model (DNN–HMM), were compared for the classification task. Additionally, a favorable radar antenna position, either in front of the user’s lips or below the user’s chin, was determined to achieve higher recognition accuracy. Experimental results demonstrated the efficacy of the proposed speech feature extraction algorithm combined with DNN–HMM for classifying vowels, consonants, words, and phrases. Notably, this study represents the first demonstration of phoneme-level SSR using contactless radar.
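The MD-DTW baseline named in the abstract amounts to nearest-template classification under dynamic time warping: an utterance is labeled with whichever stored template it can be most cheaply warped onto in time. A minimal sketch follows; the one-dimensional "radar features" and vowel templates are toy stand-ins, not the authors' implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two sequences of feature vectors (2-D arrays)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # Extend the cheapest of the three allowed warping moves.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(sample, templates):
    """Label of the template with the smallest DTW distance to the sample."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

# Hypothetical per-phoneme templates (sequences of 1-D feature vectors).
templates = {
    "ah": np.array([[0.0], [1.0], [2.0], [1.0]]),
    "ee": np.array([[2.0], [2.0], [0.0], [0.0]]),
}
sample = np.array([[0.1], [0.9], [2.1], [2.0], [1.1]])
print(classify(sample, templates))  # prints "ah": the closer template
```

Because DTW tolerates differences in speaking rate by stretching the time axis, it is a natural first baseline before the DNN-HMM approach the paper found more effective.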



Submission + - ATM PINs Can Be Reconstructed From Hand Positions Even When Obscured (researchgate.net)

rezoG writes: University of Padova researchers used a machine learning algorithm to reconstruct PINs entered by "victims" on ATMs from the position and movements of the typing hand, even when it was covered by the other hand.

Automated Teller Machines (ATMs) represent the most used system for withdrawing cash. The European Central Bank reported more than 11 billion cash withdrawals and loading/unloading transactions on the European ATMs in 2019. Although ATMs have undergone various technological evolutions, Personal Identification Numbers (PINs) are still the most common authentication method for these devices. Unfortunately, the PIN mechanism is vulnerable to shoulder-surfing attacks performed via hidden cameras installed near the ATM to catch the PIN pad. To overcome this problem, people get used to covering the typing hand with the other hand. While such users probably believe this behavior is safe enough to protect against mentioned attacks, there is no clear assessment of this countermeasure in the scientific literature. This paper proposes a novel attack to reconstruct PINs entered by victims covering the typing hand with the other hand. We consider the setting where the attacker can access an ATM PIN pad of the same brand/model as the target one. Afterward, the attacker uses that model to infer the digits pressed by the victim while entering the PIN. Our attack owes its success to a carefully selected deep learning architecture that can infer the PIN from the typing hand position and movements. We run a detailed experimental analysis including 58 users. With our approach, we can guess 30% of the 5-digit PINs within three attempts — the ones usually allowed by ATM before blocking the card. We also conducted a survey with 78 users that managed to reach an accuracy of only 7.92% on average for the same setting. Finally, we evaluate a shielding countermeasure that proved to be rather inefficient unless the whole keypad is shielded.
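The paper's headline number (30% of 5-digit PINs guessed within three attempts) is a top-3 success rate over the model's ranked candidates. A small sketch of that evaluation criterion, with hypothetical ranked outputs standing in for the deep-learning model's predictions:

```python
# A PIN counts as "cracked" if it appears among the attacker's top three
# guesses, since ATMs typically allow three attempts before blocking the card.
def top3_success_rate(predictions, true_pins):
    """predictions: per-victim lists of candidate PINs ranked by model confidence."""
    hits = sum(true in ranked[:3] for ranked, true in zip(predictions, true_pins))
    return hits / len(true_pins)

# Hypothetical ranked guesses for three victims.
ranked_guesses = [
    ["12345", "12354", "21345"],  # correct PIN ranked first
    ["98765", "87654", "98756"],  # correct PIN ranked third
    ["11111", "11112", "11121"],  # correct PIN absent from the top three
]
true_pins = ["12345", "98756", "22222"]
print(top3_success_rate(ranked_guesses, true_pins))  # 2 of 3 PINs cracked
```

The same metric explains the human-baseline comparison: the surveyed observers' 7.92% is their success rate under the identical three-attempt budget.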


Submission + - Programmer Vigilance Missing Link in Data Privacy? (researchgate.net)

rezoG writes: An Indiana University study investigated how programmers perceive the sensitivity of information (such as variable names) in code snippets. The researchers examined whether those perceptions correlate with level of technical expertise, and whether programmers reach a consensus on which snippets reference sensitive (PII) information.

Perceptions were also compared to LLM-based categorizations (the results were not promising).
