Comment Re: Why is Amazon/Alexa even saving recordings? (Score 1) 109

Not quite true. The hardware detects a simple sequence of phonemes that might be "Alexa". It then wakes up some software to try to parse the word. The audio might still be shipped off to the cloud service for spurious wakeups. Names like Siri and Alexa are intentionally chosen to use phoneme sequences that don't appear commonly in English, to minimise this.
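As a rough illustration of the cheap first stage (a toy sketch only: the phoneme IDs and the literal lookup table are invented for the example, real wake-word DSPs match acoustic models), a little state machine over a phoneme stream can sit in hardware and only wake the expensive recogniser when the whole sequence matches:

#include <stdio.h>

/* Hypothetical phoneme IDs, invented for this sketch. */
enum { PH_AX, PH_L, PH_EH, PH_K, PH_S, PH_AA };

/* Roughly "a-l-e-k-s-a". */
static const int wake_word[] = { PH_AX, PH_L, PH_EH, PH_K, PH_S, PH_AX };
#define WAKE_LEN 6

static int state; /* how much of the wake word we've seen so far */

static void feed_phoneme(int ph)
{
    /* Advance on a match, otherwise restart (possibly at position 1). */
    state = (ph == wake_word[state]) ? state + 1 : (ph == wake_word[0]);
    if (state == WAKE_LEN) {
        state = 0;
        /* This is where the cheap stage hands off to the more expensive
         * software recogniser (and possibly the cloud round trip). */
        printf("possible wake word: waking the full recogniser\n");
    }
}

int main(void)
{
    int stream[] = { PH_S, PH_AX, PH_L, PH_EH, PH_K, PH_S, PH_AX, PH_AA };
    for (unsigned i = 0; i < sizeof stream / sizeof stream[0]; i++)
        feed_phoneme(stream[i]);
    return 0;
}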

Comment Re: Why is Amazon/Alexa even saving recordings? (Score 2) 109

I don't particularly worry about Amazon intentionally violating privacy with Alexa, but when you have something like that, it's a wonderful target. The mute button is entirely software, so there are all sorts of things that an attacker can do if they compromise either an individual machine or the Amazon software update server. For example, it would be a trivial patch to make it stream the audio to a different cloud service when you press the mute button. The thousands of people working on Alexa at Amazon also make it relatively easy to sneak someone into the company to exfiltrate user data. Even if their software is entirely bug-free, what happens when someone manages to do a dump of everything that Alexa has learned about a few million users?

Comment Re:R&D (Score 1) 103

Apple does a lot of Research that isn't directly product-oriented, too; a quick look at their patent portfolio will show that.

Sorry, no. It may not be tied to products that they're currently shipping, but there's a huge spectrum between initial idea and final product, and Apple has far less investment towards the idea end of the spectrum than any of their major competitors. By the time you can patent something, it's already towards the product end (and have you actually looked at the Apple patent portfolio? They patented a more efficient take-away pizza box, for example, which doesn't really tell you anything about pure research spending).

But if you think that R that is D-oriented doesn't "count", you are nothing but an intellectual effete.

It doesn't count because it's playing accounting games. The line between development and product is very blurry. Apple classifies a lot of things as R&D that other companies count as product development. This inflates Apple's R&D spending on the balance sheet, but it means that you can't really compare the numbers directly. R&D is a pipeline, and things always have to start closer to the pure research end. Most of Apple's R&D builds on pure research done by other organisations. This has changed a bit recently (particularly in machine learning), but they're still a long way behind most other big tech companies on research spending. Microsoft, until they restructured MSR a year or so ago, had the opposite problem: they were spending over $5bn/year on research and turning very little of it into products. Neither extreme is particularly healthy for a company. You need the research end to feed the pipeline, and you need the rest of the pipeline to turn that research into products.

Disclaimer: I work in a university and collaborate with Apple, Google, and Microsoft on several projects.

Comment Re:R&D (Score 1) 103

Apple spends serious coin on Research and Development; far more than their competition.

This is almost true, though the vast majority of Apple's R&D funding is firmly at the D end of the spectrum. IBM used to spend a lot more than Apple on research, though they've cut down a lot. Microsoft still does (around $5bn/year on MSR). These companies, along with Google (and Oracle, and so on), all throw grants at universities for research, which Apple doesn't. It wasn't until the last few months that Apple even published any of their research.

Comment Re:AI Snippets... (Score 1) 326

In this respect, it's not really any different from what genetic algorithms have been doing for decades. If you have a set of executable tests that can tell whether the algorithm is working correctly, then you can evolve something that will pass the tests. Of course, you have absolutely no idea how it will behave on inputs not covered by your tests.
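For a sense of what "evolving something that passes the tests" means, here is a deliberately tiny sketch (the parameters, test values, and mutation scheme are all invented for the example): a (1+1)-style search mutates a two-parameter candidate, keeps any mutant that passes at least as many tests, and stops when the whole suite passes. Anything the tests don't check is, as noted above, completely unconstrained.

#include <stdio.h>
#include <stdlib.h>

/* Candidate "program": just the two parameters of y = a*x + b. */
struct candidate { int a, b; };

/* Executable tests: input/expected-output pairs (they happen to describe
 * y = 2x + 3). Inputs not listed here are unspecified. */
static const int test_in[]  = { 0, 1, 2, 5 };
static const int test_out[] = { 3, 5, 7, 13 };
#define NTESTS 4

static int tests_passed(struct candidate c)
{
    int passed = 0;
    for (int i = 0; i < NTESTS; i++)
        if (c.a * test_in[i] + c.b == test_out[i])
            passed++;
    return passed;
}

int main(void)
{
    srand(42);
    struct candidate best = { rand() % 10, rand() % 10 };
    while (tests_passed(best) < NTESTS) {
        struct candidate next = best;   /* mutate one parameter by -1, 0, or +1 */
        if (rand() % 2)
            next.a += (rand() % 3) - 1;
        else
            next.b += (rand() % 3) - 1;
        if (tests_passed(next) >= tests_passed(best))   /* keep it if it does no worse */
            best = next;
    }
    printf("evolved y = %d * x + %d; passes all %d tests\n", best.a, best.b, NTESTS);
    return 0;
}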

Comment Re:sign of decline (Score 1) 103

Sometimes. Apple already has their 1 Infinite Loop building, plus most of the office buildings nearby along De Anza and a few nearer the middle of town, and they're pretty short on space. It makes sense for them to be building a new big building, and the cost difference between building a new boring building and a new shiny building is pretty small. This will let them move a bunch of people who need to collaborate into offices near each other, rather than having them spread across the various De Anza buildings.

From what people were saying when I was at Apple a couple of weeks ago, it's actually coming a bit too late. The company has grown faster than they expected when they started planning, so rather than being able to move everyone from De Anza into IL2, they're having to identify sets of people who need to collaborate and move them, leaving quite a few behind in De Anza. If your company is growing faster than your ability to build office space to house people, that's generally a good sign (though the insane planning permission situation in the Bay Area means that it happens there a lot more often than you'd expect).

Comment Re:will probably take off with next gen hardware (Score 1) 149

Hololens is not VR

Indeed. AR doesn't seem to trigger the same motion sickness responses as VR, because you retain all of the visual cues from the real world.

Microsoft is once again creating a product that nobody will use.

Microsoft has created a technology that anyone can use without feeling motion sick, but you think that it will lose in the marketplace to one that about 80% of people can use without feeling motion sick? That's an interesting perspective.

Comment Re:It's just too expensive for the hardware (Score 1) 149

It's not so clear with 3D. It's something of a misnomer to call current displays 2D and this kind of VR interface 3D. Both provide a subset of the dozen or so cues that the human brain uses to turn inputs into a 3D mental model. Both, for example, manage occlusion and distance blurring, but neither manages (yet) to correctly adjust the focal depth of parts of the image that are further away. Motion sickness is caused by disagreements among these cues, and between them and the other cues that you use to build your mental model of the world. VR adjusts the image based on your head position (though latency here can cause problems, as the visual signal and the inner-ear signal arrive at different times), but it turns out that humans have a very strong notion of body image, so if the system doesn't correctly track your arm positions and update them in the game, this causes nausea in a lot of people.

Unfortunately for the 3D film and game industry, it's not the case that simply adding more cues reduces the risk of motion sickness. It turns out that a third-person perspective on a 2D display is one of the minima for the percentage of the population that experiences motion sickness. Move to first person, and this gets worse, though it's still a tiny percentage (some people can't play FPS games for more than a few minutes without feeling sick). Add a few more visual cues and you get a lot more people feeling sick. There's obviously a minimum when you get all of the cues right, because otherwise people would spend their entire lives experiencing motion sickness, but so far none of the mainstream 3D systems has found another point that's close to the 2D display. If you're going to develop a first-person game, and you can either develop it for a technology that 99% of humans can use without feeling sick, or spend more money to develop it for a technology that 80% can use, which would you do?

Comment Re:Oh please (Score 2) 203

The type of "hello world" is const (formally so in C++; in C you need -Wwrite-strings for the literal to be treated that way), so your compiler should warn that you're dropping the const in an implicit conversion (and if you're a sensible person and compile with -Werror, then your compiler will reject the code). You can get the behaviour that you want with:

const char s[] = "hello world";

This will copy the contents of the string literal into a mutable array. If you write this at the global scope, the copy will be done at compile time, so you'll end up with the string in the data section, not the rodata section (if you do it in a variable with automatic storage, you'll get a copy every time the variable comes into scope). Putting constant strings in the rodata section is important for optimisation, because it allows them to be coalesced. If you write "hello world" in two places, then you'll end up with a single string in the rodata section. With some linkers, if you also write "world" somewhere else, then you'll just get two pointers into the same string (this is also one of the reasons that C uses null-terminated strings: you can't do this with Pascal strings, and it saved a useful amount of memory on the PDP-11). Once you're sharing the string, it becomes a really bad idea to allow someone to modify it, because that modification will then become visible in a different bit of code.
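To make the two cases concrete, here's a small sketch (the variable names are just for illustration) showing a pointer into the shared, read-only literal alongside the copied, writable array:

#include <stdio.h>

/* Points at the (possibly coalesced) literal in the read-only data
 * section; the const is what stops you scribbling on shared storage. */
static const char *shared = "hello world";

/* Copies the literal into a mutable array in the data section. */
static char copy[] = "hello world";

int main(void)
{
    copy[0] = 'H';              /* fine: this is our own copy */
    printf("%s\n", copy);       /* prints "Hello world"       */
    printf("%s\n", shared);     /* still "hello world"        */

    /* ((char *)shared)[0] = 'H';
     * Undefined behaviour: on most platforms this traps, and if the
     * linker coalesced the literal, it would also silently change every
     * other use of "hello world" in the program. */
    return 0;
}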
