Comment TestFairy and only 200 users supported (Score 1) 439

From:
https://www.theverge.com/2020/...

(This is all early reporting, so it may be wrong, but it's frustratingly stupid if this is the problem.)

Rather than publish an app in the proper app stores, and rather than use an enterprise distribution strategy, they apparently used the free version of TestFairy, which only supports 200 installs, max.

Forget whether the app was insecure or didn't function--no one could install it, so it was never going to work.

Comment Re:In other news: Water wet! (Score 2) 90

But it's just a USB-C connector.

A malicious USB-C anything (keyboard, mouse, flash drive) could be created that really was Thunderbolt, and there's really no way for the user to tell. This means you should never plug an untrusted USB-C flash drive into a Thunderbolt port (unless it's through a hub that would block the Thunderbolt traffic). It could be much worse than getting an ordinary virus.

It also means your system may be exposed to unwanted searches. Every time you fly internationally, customs agents can copy the entire contents of your laptop.

Comment Re:Why a warning? Robocalls are illegal. (Score 2) 276

I agree, this would appear to be an illegal robocall. I don't understand how Google doesn't realize this is a problem.

For those in favor of this, what would you think of v2.0 of the appointment robot automatically calling ALL restaurants within a 5-mile radius to book a table? There's no reason the calls can't be simultaneous. And then it calls ALL-1 of them back to cancel. Restaurants will need to hire a team to staff their phones (or just automate that as well--this may be what Google has in mind: force businesses to automate their phones too). And then in v3.0, competitors abuse the system: do something that gets users of this product to denial-of-service their competition. Any automated call system is ripe for abuse by unsavory actors tricking normal people into doing something abusive.

And, lowly workers now have to grovel at the sounds of the automated butlers of the rich. I would expect a backlash.

Comment Re:Watched the video ...many questions (Score 1) 185

Lots of comments, but almost no one watched the video. At least this commenter did.

I did, and didn't like it. I am VERY sympathetic to the view that AI/ML can be dangerous/unethical, but this documentary is not well made. It has high production values, but it plays scary music while mocking fringe characters to get you scared of AI in general, and pads itself with 5-minute montages of computer imagery set to different scary music. If it dialed itself up a little bit more, it would be a good parody of itself.

Unfortunately, it's not completely terrible: it does mention some real things to be concerned about (truck drivers losing their jobs, big data means big surveillance, etc.). And then it shows some creepy uncanny-valley female robot doll, with the weirdo guy who invented it saying some BS, and I think we're just supposed to be creeped out. But it offers NO solutions or suggestions; it's just meant to be scary in a general way. I don't know what it wants to achieve--do we need to destroy all computers?

Let me use ML (machine learning) as shorthand for all varieties of neural nets, since that's how non-AI people refer to it; AI is more general, including general learning capabilities. ML exists today, and in its simplest form it's complex pattern matching: show a neural network a million pictures of sheep, and it learns to pick sheep out of pictures. The documentary said ML was something different, but it's wrong.

Problems with ML/neural nets: it requires massive data (surveillance issues); bias in collecting the data passes straight through to bias in the learning (google "prank a neural net"--an ML net that's excellent at picking out sheep in photos will decide a sheep photographed indoors is a dog, among other even funnier mishaps, because all the training photos of sheep were taken outside); applying it to real problems is still very much an "art" and not a "science"; much current ML runs in the cloud (Siri, Alexa, Google Home), which creates more surveillance; and there are the creepy guys on the fringe of ML (like Cambridge Analytica). Almost none of this was in the documentary.
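To make the bias point concrete, here's a toy sketch (pure Python, a naive-Bayes-style matcher, nothing like a real neural net; the features and training set are invented): because every training sheep was photographed outdoors, the background becomes a stronger signal than the animal, and an indoor sheep comes back as a dog.

```python
# ML as pattern matching, with training-data bias leaking into the model.
def train(examples, vocab):
    """Per-label feature probabilities with Laplace smoothing."""
    probs = {}
    for label in {lab for _, lab in examples}:
        rows = [feats for feats, lab in examples if lab == label]
        probs[label] = {w: (sum(w in feats for feats in rows) + 1) / (len(rows) + 2)
                        for w in vocab}
    return probs

def classify(features, probs):
    """Pick the label whose training patterns best match the features."""
    def score(label):
        s = 1.0
        for w in features:
            s *= probs[label][w]
        return s
    return max(probs, key=score)

vocab = {"wool", "fur", "grass", "indoors"}
training = [
    ({"wool", "grass"}, "sheep"),    # every sheep photo is outdoors...
    ({"wool", "grass"}, "sheep"),
    ({"wool", "indoors"}, "dog"),    # ...and one dog is a woolly poodle
    ({"fur", "indoors"}, "dog"),
]
probs = train(training, vocab)
print(classify({"wool", "grass"}, probs))    # "sheep" -- matches training
print(classify({"wool", "indoors"}, probs))  # "dog" -- the background bias wins
```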

Solutions: true data privacy (make it illegal to sell user information for any marketing purpose); clear rules on liability for ML giving bad answers (self-driving cars, loan processing, surgery); etc.

For AI, the documentary made this point quickly, then moved on: general AI may be just 3 breakthroughs away, which may mean 20-30 years. People who say "we don't know how to build this now, don't worry!" are just distracting from the real issue. Let's not start with fear; let's start from another direction. Here's one example: would it be murder to turn off a general learning AI? What rights should a learning, intelligent AI have? Doesn't it make sense to have some ethical rules for investigating a truly general AI, at least to avoid doing unethical things TO the AI itself? Note that these rules don't apply to basic ML neural nets, since those are just pattern matching. Similar rules could prevent a Morris Worm-like effect, where something gets out of hand unintentionally. Not that the main fear is the Terminator wiping out all humans, but all self-replicating systems need constraints put on them somehow, or they will run into constraints on their own.

If the AI community cannot create some basic ethics rules, then expect more documentaries like this one, blurring the difference between ML (exists today, is in use today, and its ethical issues almost all involve human-level problems) and general AI (doesn't exist, but might in a few decades--project any fears you want onto it), scaring people into doing something no one will be happy with.

Comment Facebook's First Genocide (Score 1) 96

(I reserve the right to amend that in the future if a previous genocide gets blamed on Facebook).

First, who is saying to ban free speech? Knock it off with that stupid strawman.

It's all about attention. Zeynep Tufekci writes and speaks very well about this.

There's a difference between allowing hate speech on the internet and promoting it as the top item in your news feed. The first is not the issue--the second is. Combine that with the fact that many poor people around the world cannot access the internet, only Facebook: then Facebook IS the internet for them, and what Facebook shows them is the problem. And people who've never used the internet before don't know to be suspicious of everything. And Facebook wants to show you stuff that outrages you, to make you read more ads. And what outrages more than "news" saying Muslims are murdering your neighbors? If that inspires Buddhists to actually murder thousands of people in revenge for things that never happened, well, that's just collateral damage. We gotta sell ads, and free speech, we're not to blame, we try to take down fake news after any massacres, etc.

Facebook is just insanely good at spreading evil. It may have other uses, but I think the evil on Facebook exceeds the good, especially since there are other ways to get the good parts, and the evil is getting better at using Facebook.

Comment It's the visas (Score 5, Insightful) 268

Let's say you're in China/India, and want to work in the US.

You get your undergrad degree locally, and then come to the US to get a Masters. You then get to work for a few years on a visa (I think OPT-1), after paying for just 2 years of school. You could come to the US as an undergrad instead, but then you'd have to pay for 4 years of US school, which is not as good a deal. This is the cheapest way to get a guaranteed work visa in the US--I would expect that for some students the schooling itself doesn't really matter; they are basically paying for the visa. And schools love it, since they can get these students to pay full price for their Masters programs. The article itself mentions this visa program in passing at the end--but misses the whole point.

Comment Wikileaks summaries are propaganda (Score 1) 94

Why aren't people paying attention? Wikileaks summaries are always just propaganda, intentionally misleading to work up conspiracy theorists. It's clever, though: it's based on half-truths, but it's generally nothing in the end. They pore over their info for weeks to write their summary, then dump a huge amount of material that no one can reasonably read quickly, so the media just publishes the Wikileaks summary.

Just wait a few days and the truth will turn out to be something extremely boring. Ah, but who follows up and finds out the truth? This propaganda is very effective.

I think the most shocking revelation from the Clinton email leaks was Podesta's risotto recipe.

Comment Poor Survey (Score 2) 143

I clicked through to the detailed report (which was about lots of other things), and they didn't even break the results down by iOS/Android/Windows Phone, let alone by manufacturer.

It's very possible that 99% of Google and Apple device users update the OS as quickly as possible, and 0% of Samsung/HTC/etc. users update (because no updates are offered), so this doesn't tell us anything.

Plus, I would answer "when it's convenient for me", meaning always within a day or so.

It's like they phrased the questions to get results that make for the most click-baity headlines. This is my shocked face.

Comment Re:Weak process improvement/Few ideas waiting (Score 1) 474

Yes, this post is right.

I'd say the #1 reason is power. Until about 1999, high-end designs could ignore power and do crazy things that were fast but burned power. That has all come to an end. I imagine at 14nm you could create circuits that dissipated over 1500W on one chip--just imagine the fastest transistors all toggling at the maximum rate. That just cannot be cooled, or even powered; I don't know what 2000 amps into one chip would look like. So a limit of about 130W per chip (cooling, and just the amps involved) has restricted chip design, even at the high end.
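A quick sanity check on those numbers (a sketch in Python; the 0.75V core voltage is my assumption, roughly right for a 14nm part, and not from the post above):

```python
# Back-of-the-envelope: current = power / voltage.
power_w = 1500.0  # hypothetical worst-case chip from the paragraph above
vdd = 0.75        # assumed core voltage (my number, not the post's)
print(power_w / vdd)  # 2000.0 -- the "2000 amps" figure
print(130.0 / vdd)    # ~173 amps even at a realistic 130W budget
```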

Note that individual transistors are still speeding up--you just can't use them all at once. So this is the practical end of the effective law of doubling process performance every 2 years, and the overall speed improvements are much smaller.

Comment Re:RFC5961 is flawed (Score 1) 115

If you just want to know whether a connection exists, the exact global rate limit doesn't matter.

Let's say we're sending 5000 packets/sec to probe whether a connection exists. Then we send 5 more to see if we got it right (or whatever you think the minimum per-second global throttle rate could be). If we get >= 3 back on our test channel, then the probe was not for a valid connection. It doesn't matter whether the server's throttle is 5 per second or 200 per second.
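A minimal simulation of that argument (all names and numbers are illustrative): whatever the global budget is, the spoofed flood exhausts it only when the guess is right, so the attacker's own 5 probes come back intact only when the guess is wrong.

```python
# Toy model of the server's global challenge-ACK budget for one second.
def acks_echoed_to_attacker(guess_valid, global_limit, spoofed=5000, probes=5):
    budget = global_limit
    if guess_valid:
        # Spoofed probes hit a real connection and burn the budget first;
        # those ACKs go to the real client, invisible to the attacker.
        budget = max(0, budget - spoofed)
    return min(probes, budget)  # what the attacker sees on its own connection

for limit in (5, 200):  # the exact limit doesn't change the inference
    print(limit,
          acks_echoed_to_attacker(True, limit),   # 0 -> valid guess
          acks_echoed_to_attacker(False, limit))  # 5 -> invalid guess
```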

A per-connection rate limit does fix the DDoS amplification issue. Yes, there may be lots of sockets open on a server, but hitting each socket with the right traffic to generate these ACKs is hard.

Fixing this on the server by limiting each socket's challenge-ACK rate is easy: keep a global counter of seconds somewhere in the OS, and have each socket hold a "seconds value of the last challenge ACK". When we want to send a challenge ACK, only do it if last_seconds_ack != global_seconds. This limits each socket to one per second. The counter doesn't even need to be large--it can be a byte (or even one bit), since it only has to prevent sending many within the same second. And this is just to prevent amplification DDoS, which in this case may not even be necessary, since it's so hard to generate the right traffic and these ACKs are so small.
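Here's that fix as a sketch (Python standing in for kernel C; the names are mine):

```python
import time

class Sock:
    last_ack_second = None  # per-socket state; a single byte is plenty

def maybe_send_challenge_ack(sock, send):
    # Global seconds counter truncated to one byte: we only compare for
    # equality within the same second, so the width doesn't matter.
    global_seconds = int(time.time()) & 0xFF
    if sock.last_ack_second != global_seconds:
        sock.last_ack_second = global_seconds
        send()  # at most one challenge ACK per socket per second

s = Sock()
maybe_send_challenge_ack(s, lambda: print("challenge ACK"))  # sent
maybe_send_challenge_ack(s, lambda: print("challenge ACK"))  # suppressed
```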

Comment RFC5961 is flawed (Score 5, Informative) 115

I read the brief article, and read RFC5961, and here's a quick summary:

A TCP connection is uniquely identified by the combination of Source IP address, Destination IP address, Source port number, and Destination port number. TCP also has a sequence number, which helps reorder packets. It also helps prevent spoofing, but spoofing is still possible: any computer on the internet can craft a packet to send to a Destination IP address and port with all the other fields spoofed. The spoofer just cannot receive a reply (the Destination machine sends any replies to the indicated Source IP address, which the spoofer cannot see).

So it's possible to inject SYN and RST packets into valid streams, shutting down other people's connections (although you couldn't be sure you'd succeeded). RFC5961 tries to prevent this by adding cases where the SYN/RST is not treated as valid; instead, the server sends a special "challenge ACK" to the source requesting confirmation. To avoid denial of service, these challenge ACKs are rate-limited to 10 per 5 seconds, globally. Note that these special ACKs are only generated if the SYN or RST looks "nearly" valid based on the sequence number; otherwise the RST or SYN is simply ignored.
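For reference, that global limiter behaves roughly like this (a sketch of the behavior the RFC describes, not real kernel code; the key point is that the budget is shared across ALL connections):

```python
import time

class GlobalChallengeAckLimiter:
    """One budget of 10 challenge ACKs per 5 seconds for the whole host."""
    def __init__(self, max_acks=10, window=5.0):
        self.max_acks, self.window = max_acks, window
        self.window_start, self.sent = time.monotonic(), 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.window_start, self.sent = now, 0  # new window, reset budget
        if self.sent < self.max_acks:
            self.sent += 1
            return True
        return False  # budget exhausted -- a fact an off-path attacker can observe
```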

And that's the problem they discovered. Open your own connection to any Destination that has long-term connections (they picked a USA Today website, but anything would work), and every 2 seconds try to get it to generate those challenge ACKs. If the server is not under "attack", you'll get your ACKs.

Then spoof billions of packets using a chosen Source IP address (looping through all sequence numbers and source port numbers) and a guessed destination port (say, 80).

Now send a little traffic to the Destination on your valid connection, trying to trigger those challenge ACKs. If you don't get your ACKs (due to the global rate limiting), then you "know" you've stumbled upon a valid combination of Source IP/port, Destination IP/port, and sequence number, so you know who's talking to the Destination machine. If you've picked a Source IP and port that are not connected, the challenge ACKs don't get generated for the spoofed traffic, so your slow trickle of ACK-generating traffic will not be rate-limited, telling you that this Source IP/port is not connected to this Destination IP/port.
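Putting the steps together, the probe logic looks roughly like this (pseudocode in Python; spoof_rst and count_challenge_acks are hypothetical stand-ins for raw-socket work, stubbed out here--they are NOT real APIs):

```python
def spoof_rst(src, dst):
    pass  # would send a spoofed, nearly-in-window RST as the guessed source

def count_challenge_acks(probes=5):
    return 5  # would count challenge ACKs arriving on our own connection

def source_is_connected(victim_ip, victim_port, server):
    # Flood the guessed 4-tuple; if that connection is real, each spoofed
    # packet triggers a challenge ACK (sent to the victim, invisible to
    # us) that burns the server's shared global budget.
    for _ in range(5000):
        spoof_rst(src=(victim_ip, victim_port), dst=server)
    # Then request a few ACKs on our own legitimate connection: getting
    # starved (fewer than 3 of 5) means the budget was already spent.
    return count_challenge_acks(probes=5) < 3
```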

So RFC5961 turns a pesky annoyance bug into one where it's possible to determine who's connecting to a particular website (although it's time-consuming).

With more care, they can then figure out the sequence number--and once you have that, you can do targeted data injection. (Blind data injection is always possible, but the chance you can accurately inject javascript is low, since the sequence number is hard to predict.)

RFC5961 should not do global rate limiting--it leaks important data.

Comment Re:Design by Committee (Score 1) 243

You would have made a terrible HP executive. Maintaining and growing a profitable business gets you nowhere--no one wants to make a person like that CEO of a smaller company.

You gotta shoot for the moon and miss, and fail upwards slightly. Succeed, and you fly upwards to CEO of a large company. Most silicon valley CEOs just got lucky at some point in their life. Too bad luck doesn't repeat like skill.

And there's no way Intel would have bought PA-RISC; they needed the cover of developing something new. Basically, any deal with Intel would have gone badly for HP, and many at HP knew it. It was a short-term boost (before IA64 shipped) at the expense of the long-term business. Intel at the time wanted to go after the RISC server market--they wanted some of that money for themselves--by creating chips they could sell to HP's competitors to keep HP under control. If IA64 had taken off, it still would have gone badly for HP, in that HP would have had more competitors than ever selling the same chips it was. At least IA64 is now squarely aimed at expensive servers, and HP doesn't have many IA64 competitors.

Lots of people think IA64 was just a big fiasco perpetrated by stupid people. It was a smart ploy by Intel to crush Unix servers (and Intel would have been happy if IA64 had worked too--win-win). On that score, it was very successful.

Comment Design by Committee (Score 4, Interesting) 243

IA64 started as an HP Labs project to design a new instruction set to replace HP's PA-RISC. VLIW was a hot topic around 1995. HP Labs was always proposing things that the development groups (those making chips/systems) ignored, but for some reason this one had legs.

The HP executive culture: hire mid-level executives from outside, who then do something big to get a bigger job at another company. A lot of HP's poor decisions over the last 20 years can be traced directly to this culture. And there was no downside--if you failed, you'd move to an equivalent job at another company and try again.

So enterprising HP executives turned HP's VLIW project into a partnership with Intel, and in return HP got access to Intel's fabs. This was not done for technical reasons. Intel wanted a 64-bit architecture with patents that would lock out AMD, and would never buy PA-RISC, so it had to be something new. HP was behind the CPU performance curve by 1995 because its internal fab hadn't kept up with the industry (HP didn't want to spend the money), so HP could save billions in fab costs if Intel would fab HP's PA-RISC CPUs until IA64 took off. For these non-technical reasons IA64 was born, and enough executives at both companies became committed enough to guarantee it would ship.

For a while, this worked well for HP. The HP CPUs went from 360MHz to 550MHz in one generation, then pretty quickly up to 750MHz. I thought IA64 would be canceled many times, but then it became clear that Intel was fully committed, and they did get Merced out the door only 2 years late. IA64 was a power struggle inside Intel, with the IA64 group trying to wrest control from the x86 group. That's where the "IA64 will replace x86" talk was coming from--but even inside Intel, many people knew that was unlikely. Large companies can easily do two things at once: try something new, but keep a backup plan in case it doesn't work.

But IA64 as an architecture is a huge mess. It became full of every performance idea anyone ever had, which just meant there was a lot of complexity to get right, and many of the first implementations made poor choices. It was a bad time for a new architecture: designed for performance, IA64 missed the power wall about to hit the industry hard. It also bet too heavily on compiler technology, which, again, all the engineers knew would be a problem. But see the non-technical reasons above--IA64 was going to happen, and performance features had to go in so it would crush the competition. The PowerPoint presentations looked impressive. It didn't work out: the performance features ended up lowering the clock speed, delaying the projects, and hurting overall performance.
