Comment Re:Monkey See, Monkey Buy Other Monkey's Copy (Score 1) 18

They would have got more value out of a version of VNC.

Perhaps, but PCoIP was a specialized protocol aimed mostly at niche markets. Graphics production for Hollywood movies was one, where leaks of pre-release materials could sink the whole project. With PCoIP, you can distribute your graphics work across multiple independent studios, and none of them actually keeps any of the assets on their own machines. They're essentially doing their high-res graphics work on thin clients.

Another market was testing for higher education, for similar security reasons. People try to cheat on tests all kinds of ways.

Teradici always kind of struggled to market PCoIP, though, because their primary "product" was really just a protocol. Their model was to license it to other companies, who then used it to build bespoke solutions for clients. There was a bunch of intellectual property behind it, but not everybody could see the value. They even considered open-sourcing it, but I don't think they were ever serious enough to get someone to consult them on how they could do that and still preserve the licensing revenue.

(Full disclosure: I spent about a year helping Teradici with PR.)

Comment Used them for work and personal backups (Score 1) 121

I used to carry a 100 MB Zip disk between work and home, rotating through one for each working day. I had about 30 Zip disks in total, and I only shredded them all about 10 years ago after copying them onto more modern backup storage. Then I tossed the Zip drives as well.

Fantastic storage for the time, never had one go bad.

Comment Re:This is like (Score 2) 42

Yep. OnlyOffice wants their hosting money. They want control. They're the assholes.

Maybe that's true, but I'm not getting that from the summary. What I'm getting is:

  • OnlyOffice spent a decade developing their office code, distributing code that they authored under a modified AGPL license that requires attribution to be preserved.
  • EuroOffice removed the attribution.

If EuroOffice removed attribution requirements only on code that was created by someone other than OnlyOffice, and did not use the code authored by OnlyOffice, then they're fine. But I think courts have already ruled that the AGPL term about being able to remove conflicting terms applies only to terms added by someone other than the author, so if they used code authored by OnlyOffice, they may have a problem.

Comment Re:A serious question (Score 1) 40

It's a good question, and one I'm working on getting an answer to: by giving AI hard, complex engineering problems, and then having engineers look at the output to determine whether it is meaningful or just expensive gibberish.

By doing this, I'm trying to feel around the edges of what AI could reasonably be used for. The trivial engineering problems usually given to it are problems that can usually be solved by people in a similar length of time. I believe the typical savings from AI use are on the order of 15% or less, which is great if you're a gecko involved in car insurance, but not so good if you're a business.

If the really hard problems aren't solvable by AI at all (it's all just gibberish) then you can never improve on that figure. It's as good as it is going to get.

I've open-sourced what the AIs have come up with so far, if you want to take a look. That is what is going to tell you whether good can come out of AI or not.

Comment Re:Employee conversation in work environment (Score 1, Interesting) 40

The conversations are not private, but PII laws still apply. Anything in the messages that violates PII privacy laws is forbidden regardless of company policy. Policy cannot overrule the law.

Now, in the US, where privacy is a fiction and where double-dealing is not only perfectly acceptable but a part of workplace culture, that isn't too much of an issue. The laws exist on paper but have no real existence in practice.

However, business these days is international, and American corps tend to forget that. Any conversation involving European computers (even if all employers and employees are in the US) falls under the GDPR and is under the auspices of the European courts and the ECHR, not the US legal system. And cloud servers are often in Ireland. Guess what. That means any conversation that takes place physically on those computers in Ireland plays by European rules, even if the virtual conversation was in the US.

This was settled by the courts a LONG time ago. If you carry out unlawful activities on a computer in a foreign country, you are subject to the laws of that country.

Comment Eric Schmidt on AI used to make bioweapons soon (Score 1) 13

From the transcript, about 43 minutes into a public conversation with Eric Schmidt from Apr 10, 2025: https://www.youtube.com/watch?...
====
          "Question: Thanks for the great conversation so far. Leonard Justin. I'm a PhD student at MIT. Um, I was wondering if you could just discuss a bit more some of the risks you see coming specifically with respect to biology and how we should go about mitigating those. What's the role of the AI developers? What's the role of government? Um, yeah, how can we move forward on that?
        ----
        Schmidt: So, so you're going to know a lot more about this area than I, but speaking as an amateur in your field, the two current risks from these models are cyber and biorisks.
        The cyber ones are easy to understand. The system can generate cyber attacks and in theory can generate zero-day cyber attacks that we can't see and it can unleash them and furthermore it can do it at scale.
        In biology, you get some evil, you know, the equivalent of Osama bin Laden. They would start with an open-source model. Now these open source models have been restricted using a testing process. Uh they're called cards and they test it out and they delete that information from the model.
        It turns out it's relatively easy to un to reverse essentially those security modes around the model and that's a danger. So now you've got a model that can generate bad pathogens.
        Then the second thing you have to do is you have to find things to build them. Our collective assessment at the moment is that that's a nation state risk, not an individual terrorist risk. Although we could be wrong, but there's plenty of examples uh and this the the report talks about some of the Chinese examples where in theory if they wanted to they could not only manufacture bad things but sorry design them but also manufacture them.
        The good news and the reason we're all alive today is that the bio stuff is hard to manufacture and distribute and to make deadly and and spread and so forth and so on. Um there's lots of evidence for example that you can take a bad bio right now and modify it just enough that the testing regimes and the sort of surveillance regimes it bypasses and that's another threat.
        So that's what I worry about.
        But I think at the moment u our consensus is we're right below the threshold where this is an issue and the consensus in in my side of the industry is that one more or two more turns of the crank these issues will be -- and you know by then you'll be graduated and you can sort of help solve these problems.
        Um the a crank is turned every 18 months or so. This is about three years.
        ----
        Moderator: But theoretically, couldn't AI and biotechnology help you come up with a counter measure?
        ----
        Schmidt: Um, I had thought so, and that was the argument I made until I I do a lot of national security work. And there's a term called offense dominant. And an offense dominant is a is a situation in a military context where the attack cannot be countered at the same level as the attack. In other words, the damage is done.
        And most people, most biologists who've worked in this believe that while the model can be trained to counter this, the damage from the offense part is far greater than the ability to defend it, which is why we're so worried about it."
====

Ultimately, I feel a big part of the response to that threat needs to be a shift in perspective like through people laughing at my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." :-)

Explored in more detail here:
"Recognizing irony is key to transcending militarism"
https://pdfernhout.net/recogni...
        "... Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
        These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
        There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ..."

Comment Re:who are they kidding? (Score 1) 57

I used to have a workstation that had a sliding cover for the camera. Maybe it was an SGI Indy? I forget. I think only some Linux laptops have hardware covers / kill switches for the camera and mic? I would *really* like such for a MacBook Pro; how about a physical low-profile sideways cover / toggle switch that disables the camera and mic together?

As for biometrics, I was always against them. But then... iPhone Face ID: so useful. And kind of necessary with the default settings, though maybe we should just keep our phones unlocked for longer? And then the MacBook's fingerprint scanner button. Actually super useful, mainly for getting around system password prompts, and fine for the local keychain. But then I tried Google's passkeys. Also quite useful, though scary; it seems to use a passkey Apple hands out if your fingerprint matches. The only thing is, if your fingerprint is ever allowed to leave your machine (probably it already has), then your biometrics are in somebody's cloud, and in a year or two someone could deepfake them. That's the obvious part. Retinas? Don't get me started. I'm guessing those will probably be robust even after laser surgery.

Comment Not interesting yet. (Score 4, Informative) 49

It's possible that cetaceans have a true language. They certainly have something that seems to function the same as a "hello, I am (name)", where the name part differs between all cetaceans but the surrounding clicks are identical. The response clicks also include that same phrase which researchers think serves the purpose of a name.

But we've done structural analysis to death and, yes, all the results are interesting (it seems to have high information content, in the Shannon sense, seems to have some sort of structure, and seems to have intriguing early-language features), but so does the Voynich Manuscript and there's a 99.9% chance that the Voynich Manuscript is a fraud with absolutely no meaning whatsoever. Structure only tells you if something is worth a closer look and we have known for a long time that cetacean clicks were worth a closer look. Further structural work won't tell us anything we don't already know.
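To make the "high information content, in the Shannon sense" point concrete, here is a minimal sketch (the click "alphabet" and sequence are invented for illustration, not real cetacean data) of the kind of entropy estimate this structural analysis relies on:

```python
# Hypothetical sketch: estimating the Shannon information content of a
# coded symbol sequence. The click "alphabet" here is made up.
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Bits per symbol under an i.i.d. model. For real sequences this is
    only an upper bound: structure (which is what makes the clicks
    interesting) lowers the true entropy rate."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

clicks = list("ABAACABADABAACAB")  # toy stand-in for coded click types
print(round(shannon_entropy(clicks), 3))
```

As the comment says, though, a high number here only tells you something is worth a closer look; the Voynich Manuscript would score well too.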

What we need is to have a long-term recording of activities and clicks/whistles, where the sounds are recorded from many different directions (because they can be highly directional) and where the recording positively identifies the source of each sound, what that source was doing at the time (plus what they'd been doing immediately prior and what they do next), along with what they're focused on and where the sounds were directed (if they were). This sort of analysis is where any new information can be found.

But we also need to look at lessons learned in primate research, linguistics, sociology and anthropology, to understand what ISN'T going to work, in terms of approaches. In every one of those fields, we've learned that you learn best immersively, not from a distance. If an approach has failed in EVERY OTHER SOCIAL SCIENCE, then assuming it is going to work in cetacean research is stupid. It might be the correct way to go, but the assumption is the stupid part. If something fails repeatedly, regardless of where it is applied, then there's a decent chance the thing that keeps failing is simply defective.

Comment Re:This is pretty well done (Score 2) 109

Second, when the EU says you can verify your age without revealing your identity, they seriously mean it. I worked on the ISO 18013-5 mobile driving license standard, and its protocol is the basis for the age verification scheme (18013-5 also supports privacy-preserving age verification).

The spec contradicts itself in various places, with sections saying that the app interacts with the attestation provider only once and that the attestation cannot be reissued, and other sections implying that the attestation gets reissued every three months and that the tokens are single-use.

It also isn't clear about whether they are actually using 18013-5 or are just requiring companies to implement a few tiny fragments of the spec.

I was left more confused after reading the spec than I was before.

Comment Re:Bridge for sale (Score 1) 109

Looks like I spoke too soon. The specification massively contradicts itself. 3.4.2 requires reissuance every three months, and requires that it issue 30 attestations at a time, and that they be single-use.

That part is architecturally correct, though allowing access to only 30 adult sites per three months is dubious. And if getting a new proof requires a new request at some point, then it becomes possible for the trusted list provider, conspiring with the proof of attestation provider, to cross-correlate the timing of requests and unmask a user with high probability.

And then, there's this:

3.4.1 Issuing of Proof of Age batches

Since Proof of Age Attestations are designed for single use, the system must support the issuance of attestations in batches. It is recommended that each batch consist of thirty (30) attestations. Since the timestamps in the ValidityInfo structure of the mdoc encoding of a Proof of Age Attestation can provide linkability clues, the Attestation Provider should set these timestamps with a precision that limits the linkability information. For this reason the ISO/IEC 18013-5 recommendation should be followed, i.e., setting the hh, mm and ss information to the same value on each Proof of Age Attestation.

So you still have a value that is potentially usable for tracking across multiple websites, and it's just a timestamp. I'm not sure I'm reading what they're saying correctly. If they mean that all 30 attestations in a batch share the same value, this is a disaster. If they mean always set the value to 00:00:00 so you get only one day of precision, that's better than nothing, but when the request comes from an area with low population density, it is still potentially inadequate for anonymization.
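Under the second, more charitable reading, the mitigation amounts to truncating the issuance timestamp. A minimal sketch (my interpretation of the quoted text, not code from the spec):

```python
# Sketch (assumptions mine): reducing timestamp linkability by truncating
# the ValidityInfo timestamp to day precision, i.e. hh, mm, ss set to the
# same fixed value on every attestation.
from datetime import datetime, timezone

def truncate_for_privacy(ts: datetime) -> datetime:
    # Day precision: every attestation issued the same day carries the
    # same timestamp, so the timestamp distinguishes at most ~365 buckets
    # per year -- still a signal in low-population areas.
    return ts.replace(hour=0, minute=0, second=0, microsecond=0)

issued = datetime(2025, 7, 14, 9, 37, 22, tzinfo=timezone.utc)
print(truncate_for_privacy(issued).isoformat())  # 2025-07-14T00:00:00+00:00
```

Even truncated, the date plus any other quasi-identifier (region, batch size) can narrow the candidate pool considerably.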

I can't make heads or tails of this specification. It contradicts itself in too many places, and it buries you in minutiae while lacking a clear overview. It's the kind of spec only a bureaucrat could love, because it is perfect for verifying compliance, but makes it nearly impossible to quickly verify that the spec makes sense. It lacks a section on threat models and how it addresses those threats, which is the first thing I'd expect to see.

At this point, I have no idea whether this protects privacy or not. And that's perhaps more disturbing.

Comment Re:Bridge for sale (Score 2) 109

I sure don't believe the "completely anonymous" part.

It is possible, in theory. But calling this "completely anonymous" is hopelessly naïve, IMO, unless I'm missing something *huge*.

Announcing that this is "technically complete" is laughable. I have not seen a single public white paper on the subject. We should have seen years of back and forth between academics, crypto experts, operational security experts, privacy experts, and other groups, as they all tear apart the design over and over again until it is refined into something that actually provides the claimed anonymity.

The lack of this public discourse leads me to the inevitable conclusion that it almost certainly provides the illusion of protecting privacy, while in fact massively violating it to a greater degree than ever before.

And sure enough, I started skimming the technical specification, skipped the whole first section, which was mostly justification, and almost immediately found a fatal flaw.

Unless I'm missing something, this is a show-stopper, and points to the entire architecture being fundamentally unusable:

2.2.3 Revocation and Re-Issuance

In its current form, the solution does not support revocation or re-issuance. Adding support for these features would introduce additional complexity, which could hinder the rapid adoption of the solution.

What this means is that a user gets a magic token that proves that the person is of a particular age, then submits that token to sites for verification. Here's a list of problems with that approach:

  • The same attestation is sent to every site. So the fingerprint of that certificate becomes the *ULTIMATE* tracking cookie. Every adult website will effectively know who you are. They won't know precisely who you are, but they will be able to correlate activity across sites, target ads to your specific behavior across multiple sites, etc.
  • It is impossible to regenerate that token, so once your privacy has been thoroughly raped and random websites are showing you ads for hardcore porn, you can never turn it off.
  • As soon as you pay for anything with any of those adult sites, your identity is now known, and can be correlated with your activity across all adult sites.
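The first bullet is worth making concrete. A toy illustration (not the real protocol; the token bytes are invented) of why a single reusable attestation is a cross-site identifier:

```python
# Illustration (toy example): if the same attestation bytes are presented
# to every site, the hash of those bytes is a stable identifier, exactly
# like a tracking cookie shared across all the sites that see it.
import hashlib

attestation = b"single-issued-proof-of-age-token"  # hypothetical token bytes

def fingerprint_seen_by(site: str, token: bytes) -> str:
    # Each site independently hashes what it receives from the user...
    return hashlib.sha256(token).hexdigest()[:16]

fp_a = fingerprint_seen_by("site-a.example", attestation)
fp_b = fingerprint_seen_by("site-b.example", attestation)
# ...and both sites derive the identical fingerprint, so their logs can
# be joined on it.
print(fp_a == fp_b)  # True
```

The sites never need to know your name: the joinable fingerprint alone is enough for cross-site profiling and ad targeting.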

Using the words "privacy rape" to describe this technology is not nearly a strong enough statement, but it is the strongest phrasing I could come up with.

Protects anonymity, my ass.

About the only good thing that can be said about this is that, because they didn't specify minimum requirements for storage protection, chances are it will get hacked in the first week. A few adult users' attestations will show up on the dark web and get used by a few million underage users' devices, making it useless as proof of age, and hopefully prompting the folks who thought this approach was adequate to shut it down quickly.

Like I said, give us a public comment process, articles published in multiple reputable journals, etc. and in five to ten years, this will be ready. It's not ready. It's not close. It's not even in the right ballpark.

For this to be completely anonymous, it must not be possible for a government actor with control over infrastructure to perform timing attacks on anonymity, e.g. the user requests an auth token from the government, the government knows who that user is, the government sees an unencrypted DNS request to a porn site ten seconds earlier, and correlates the two requests.
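A hedged sketch of that timing attack (all the data here is invented; the window and logs are assumptions for illustration):

```python
# Sketch: an actor that sees both token-issuance times and site-connection
# times can link the two event streams when they fall within a short
# window. This is probabilistic unmasking, not proof of identity.
issuance_log = [("alice", 100.0), ("bob", 400.0)]           # (user, epoch secs)
connection_log = [(110.0, "porn-site.example"),              # (epoch secs, site)
                  (405.0, "porn-site.example")]

def correlate(issuances, connections, window=30.0):
    """Link each user to any connection occurring within `window` seconds
    of that user's token request."""
    links = []
    for user, t_issue in issuances:
        for t_conn, site in connections:
            if 0 <= t_conn - t_issue <= window:
                links.append((user, site))
    return links

print(correlate(issuance_log, connection_log))
# [('alice', 'porn-site.example'), ('bob', 'porn-site.example')]
```

With low traffic volume, even a wide window unmasks users with high probability, which is why the requirements list below rules out any correlatable timing.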

Doing this correctly is genuinely really, really hard. You need:

  • A different token sent to every site, with no common data that can correlate accesses across multiple sites.
  • No ability to correlate the timing of the user's request for proof and the timing of a user connecting to a website.
  • No ability to correlate the timing of the user's request for proof and the timing of a verification request from a website to the verifying authority.

This starts with the verification authorities outsourcing the verification to the "RP" (relying party), which uses public keys to verify the signature. That way, the government entity has no record of verifications to correlate with issuance requests.

This continues with the client queueing up a thousand or so pre-signed certs from the signing authority and requesting replacements on a time-based schedule (say, once per day, with a randomized replacement rate, and with the client silently discarding excess certs so that it maintains a consistent pool size).
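A minimal sketch of that client-side pool (the pool size, refill amount, and randomization range are my assumptions, not from any spec):

```python
# Sketch: a client-side pool of pre-issued tokens, topped up on a
# randomized schedule and capped at a fixed size, so that neither the
# refill timing nor the refill amount leaks how many tokens were spent.
import random
from collections import deque

POOL_SIZE = 1000  # assumed target pool size

def replenish(pool: deque, fresh_tokens: list) -> deque:
    pool.extend(fresh_tokens)
    # Silently discard the oldest excess so the pool size stays constant
    # regardless of actual usage.
    while len(pool) > POOL_SIZE:
        pool.popleft()
    return pool

def next_refill_delay(base_hours: float = 24.0) -> float:
    # Randomize the request time so issuance requests cannot be
    # correlated with site visits.
    return base_hours * random.uniform(0.5, 1.5)

pool = deque(f"token-{i}" for i in range(POOL_SIZE))
pool = replenish(pool, [f"token-new-{i}" for i in range(50)])
print(len(pool))  # 1000
```

The constant pool size is the point: an observer of the issuance channel sees the same steady drip from every client, active or idle.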

This is a starting point. I'm not saying that these things are sufficient, just that they are necessary.
