Comment Up to me? (Score 1) 252

Fine! I don't need cameras. I don't need a networked fridge. I don't need networked lighting. I don't need to look up my heating curve over the year on the internet.

IMHO the machines should be as dumb as possible. Heating/AC should have a timer. (Oh wait, it has had that for the last 20 years.) The energy savings you can achieve by starting your heating/AC not at a fixed time but "just before you come home" are not that high.

So yeah. MCUs with 128 bytes of RAM, no network connection, and power consumption in the µW to mW range, without any OS, work for me. If you really are interested in consumption data, add a fucking SD slot: if I write 1 kB every minute for a year, a 1 GB card is only filled to 50%.
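A quick back-of-envelope check of that 50% claim (my own arithmetic, nothing here comes from a datasheet):

```python
# Logging 1 kB of sensor data every minute for one year:
KB_PER_MINUTE = 1
MINUTES_PER_YEAR = 60 * 24 * 365          # 525,600 minutes
total_kb = KB_PER_MINUTE * MINUTES_PER_YEAR

CARD_KB = 1024 * 1024                     # a 1 GB (binary) card, in kB
fill_fraction = total_kb / CARD_KB
print(f"{fill_fraction:.0%} of the card used")  # roughly half
```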

Comment O tempora o mores? Partially! (Score 1) 840

Me: a physicist who is very happy to have been born in 1975 and to have seen the best of both worlds, hardware and software. I can use every device you'd find in an electronics lab, and I have used more than 10 programming languages.

There are several factors contributing to his impression. The era after we finally understood electromagnetism and built amplifiers and switches was very homogeneous in terms of technological development. We used AM/FM for nearly a hundred years: you can basically use the very first tube radios to receive music from the FM transmitter you use to feed the radio in your car (hopefully only where that's legal!). So for a hundred years you designed an antenna, a filter, maybe an oscillator and a mixer, another amplifier. In the LF world it was even simpler. Give me an oscilloscope and a probe, and I'll find where the amplifier is broken. You could start learning this as soon as you learned how to read (I did).

The devices were expensive: buying a television was a big decision back when I was a child, and would push a family's monthly savings below zero; if you had a low wage you actually had to save up to buy one. The same is true for audio equipment and computers (the computers I buy actually get cheaper each time). This meant that every device actually came with a circuit diagram. You open a radio from 1920 (I found one), wonder what is broken, and find a circuit diagram inside. Our television actually came with a circuit diagram with checkpoints, illustrated waveforms you should see on the oscilloscope, and a list of parts/modules which typically would distort the signal. Yes, that was free, in an envelope stuck inside the back cover of the device. So instead of inventing special screws, gluing things together to save the last 0.01 cent in manufacturing, and giving the service manual only to "selected partners", the manufacturers actually helped you maintain the value of the device. We had that television for 20 years, and it was repaired once.

So what happens now?

a) There is a big change in technology which has not stabilized yet, so there is no equivalent of the "standard electronics workbench" which cost $5000. There may be a JTAG standard for actually diagnosing devices, but no standard connector, just a few undocumented spots on the PCB. And no manufacturer actually tells you or promises you anything about it.

b) Manufacturers don't like to give out access to software, or even diagnostic tools. Partially for legal reasons, I suppose.

c) I observe that bricking by damaged firmware accounts for a substantial fraction of the devices which really fail hard (in my case two embedded controllers in ThinkPads, one Google Chromecast, and once some embedded firmware in an Acer laptop). One should say that statefulness is a curse if you try to fix things.

d) Apart from a Bluetooth headset and an MP3 player which were fried by a bad USB power supply, I have not personally observed any hardware failing. The only computers whose hardware I saw fail at work were a few Intel boards with bad capacitors.

e) The discrete analog part of the circuits gets smaller and smaller.

So up to now this professor was pampered in EE with people who all did analog electronics as a hobby, a very homogeneous group who all learned the same technologies. Now he is confronted with people to whom this knowledge is not valuable, because of the world they live in. But they are probably better at programming and fixing software, potentially even at hacking the firmware of devices. So on modern devices they may actually fix more than he could.

Sure, he may be able to re-solder a broken connector. But instead of demanding that his students be able to follow him from day 1 in using the oscilloscope, he should accept that the mix of students has shifted toward software experts, just as device functions have shifted to being software-defined; the EE course should contain a lab course which gives you the basic knowledge anyway. People who change their path or discover it late are valuable in any subject, and I always despised the idea that technical skills are absolutely needed to *start* studying EE (or physics). I agree that handling a network analyser is still valuable, but perhaps as an advanced skill, for the people actually designing the RF frontends. For that part you aren't going to get far by "fixing" anyway (believe me, once an RF circuit trace is damaged, it's hard to give it back the right impedance by hand).

Comment weird (Score 1) 449

The central claim of Linus seems to be that many people out there claim an efficiency increase from parallelism. I agree that many people claim (IMHO correctly) an increase in performance (a reduction of execution time), within the constraints of a given technology level, from symmetric multiprocessing. But I have not heard many people claim that efficiency (in terms of power, chip area, component count) is improved by symmetric, general parallelization; and nobody with a good understanding of the information-related aspects of computation.

Speaking as a physicist, I find it disturbingly easy to show the opposite for many cases in the limit of ideally performing systems (that is, resources per implemented gate operation remaining constant with the number of gate operations).

Having said that, I speculate that there are reasons to introduce parallelism:

a) The performance you require cannot be achieved without it. An example would be an FPU, or even just an 8-bit full adder. You *can* implement it bit-serially, but you don't want to. The full adder is also an excellent example of how parallelism can increase power consumption (e.g. fast carry-lookahead) and resource usage.

b) Your implementation simulates operations in a way which requires significant effort for fetching and decoding the simulated function. The extreme case of an extreme RISC processor with one-bit operations and only a 1-bit ALU is, for many problems, less efficient than the processors we use. This means there probably is an ideal "processing power/RAM (cache)" combination, which is a function of your communication cost (i.e. bus drivers) and your algorithm.

c) From b) we can actually see that it can be extremely reasonable to create non-symmetric multiprocessing units. For watching a sensor signal for changes, an 8-bit 1 MHz microcontroller with fewer than 100 kGates may be an excellent choice (see the TI MSP430 line, for example), since it does not insist on keeping an overkill ALU persistently powered.

d) Parallel programming is almost never used to increase efficiency (unless you really have distributed input/output and inherent costs of collecting it), but only for those operations where the efficiency loss due to parallelism is negligible (or zero).
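Point d) can be sketched with a toy energy model (the numbers and the 5% per-core coordination overhead are my own illustrative assumptions, not measurements):

```python
def energy(work, cores, power_per_core=1.0, overhead=0.05):
    """Energy = power x time for a toy machine: perfectly parallelizable
    work, but each extra core adds a fixed coordination overhead."""
    time = work / cores                                  # wall-clock time shrinks...
    power = cores * power_per_core * (1 + overhead * (cores - 1))
    return time * power                                  # ...but total energy does not

serial = energy(100, 1)     # 100.0 energy units
parallel = energy(100, 8)   # 135.0: 8x faster, yet less efficient
```

With zero overhead the energy is identical either way; any real coordination cost makes the parallel version strictly less efficient, which is exactly the performance-versus-efficiency distinction above.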

Comment Re:Wow. Superbad. (Score 1) 138

Maybe. Maybe not. I'm not sure what the effect of second-order page translation would be if you manage to trigger the loading of a module (or the first use of memory in a module) in another VM after your VM has been loaded. If you manage to trigger access to the module's data memory, which normally may be unused once you allocate ("pad") enough memory, I could imagine that you can actually corrupt "nearby" data (which with second-order translation would appear physically close to your memory).

I am not saying that this is a MOV instruction into another VM's memory, merely that with an educated guess and innocent network communication you could sometimes reset or clear flags or counters, which may enable you to do further things.

Comment Re:I don't get it (Score 1) 85

Moreover, this task is actually so simple that when I considered it as a hobby project my first thought was: it shouldn't take more than 3 TTL chips to implement. But then I thought: it would be interesting to get the transistor count below 10 per color channel, built directly from transistors, resistors and diodes.

But only if I get the headline "guy replaces iPhone, Bluetooth module and Arduino with 20 transistors".

Comment Wow. Superbad. (Score 2) 138

That's an evil bug. It could even be triggered accidentally by bad programming.

But more important, this allows you to break your VM's memory boundaries without any restriction. If you happen to make an educated guess about the memory layout of the physical machine and of the loaded host and guest kernel images, you can try to

a) manipulate the host kernel directly (that would be nearly undetectable)

b) manipulate private keys in other VMs or the host

c) manipulate other VMs memory

d) communicate between VMs

And all of this independent of any software bug. The only thing that can be done about it would be to disable the feature of the simulated guest processor which allows manipulating the cache arbitrarily (and implicitly limit running guest programs to 1 core!). Alternatively, increase the refresh rate (I remember that the refresh rate could actually be set manually in the 90s).

That being said, I just wonder whether it's possible to trigger this bug from a high-level language (e.g. MATLAB) or the JVM, where the operation causing the problem could be used implicitly by some vectorized code or other operations. E.g., can this bug be triggered by the volatile keyword in Java and accessing memory in the same way?

Comment Re:Special service available!=net neutrality viola (Score 2) 55

It is fine if I have to pay for more bandwidth/allocated bandwidth.

As long as everybody has to pay the same price for it. Because then I (as a customer or provider) can compete in a specialized area with Google.

If only companies which can afford their own ATM networks and are powerful enough to push everybody else to give preference to their traffic get this, then nobody can compete.

Which is why I think a mandatory split of companies into branches, and trading of bandwidth of all kinds (guaranteed, allocated, and opportunistic) on an open exchange, would be appropriate.

And actually: yes, there is traffic which I as a consumer need with higher priority than other traffic. I would just appreciate having the choice.

Comment Special service available!=net neutrality violated (Score 3, Interesting) 55

IP packets have had a TOS field from the beginning. IPv6 has this again. I am fine with, and appreciate, prioritization/TOS if:

* ISPs explicitly list these classes of traffic in their terms.
* Everybody (no matter whether Google or a one-person specialized software shop) can buy priority traffic on the backbone with a specific latency/reliability class.
* Traffic/capacity is traded only through an open market (tick exchange), with no "secret deals".
* Costs for traffic appear separately on customers' bills, even if the overall product is free.
* The "last mile" is a deal between the customer and *his* ISP. Cross-financing the last mile from other businesses should be considered abuse of a vertical monopoly.
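For illustration, the TOS field mentioned above is still exposed by ordinary socket APIs; a minimal sketch (the DSCP class chosen here, Expedited Forwarding, is just an example):

```python
import socket

# DSCP value 46 ("Expedited Forwarding") occupies the top 6 bits of the
# old IPv4 TOS byte, so the byte written to the header is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
# Every datagram sent on this socket now carries the EF marking;
# whether networks honor it, and at what price, is the policy question.
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
sock.close()
```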

Comment What to wonder about? (Score 4, Informative) 197

The MPC565 is pretty standard in aerospace. It has all the features you need, and no more:

* Clock: in the low MHz range. Pretty easy to make transmission reliable, even if a PCB trace is damaged or the board deteriorates.

* No MMU: why the hell would I put an MMU in a controller which should perform identical operations over 5 to 40 years, has no additional unplanned tasks, and runs software which is somewhere between well tested (level D) and insane (level A)? The complexity of an MMU is incompatible with certifying this thing as level A (critical) for any reasonable price.

* Big SRAM on chip. Buffer the processor's supply voltage well, and it does not matter to you if the clock fluctuates wildly.

* Flash on chip (for program storage). So you can be pretty sure that as long as your program runs, it will run well.

That being said, it should be mentioned that a variant of TFTP (35 years old) is the standard for loading software onto parts in planes.
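TFTP (RFC 1350) really is that simple: a read request is just a 2-byte opcode followed by two NUL-terminated ASCII strings. A minimal sketch (the filename is made up):

```python
import struct

OP_RRQ = 1  # RFC 1350 opcode 1: read request

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read-request packet: big-endian opcode, then the
    filename and transfer mode as NUL-terminated ASCII strings."""
    return (struct.pack("!H", OP_RRQ)
            + filename.encode("ascii") + b"\0"
            + mode.encode("ascii") + b"\0")

pkt = tftp_rrq("fw.bin")   # b'\x00\x01fw.bin\x00octet\x00'
```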

Comment No (Score 1) 545

Overtime needs to be paid appropriately.

If somebody (in my case: simulation experts) does something for me, I want them to be motivated to do a good job every hour they work. If I misplanned the project and they are the only ones who can handle it, it should be on my bill / the company's bill, not on theirs. I also don't want them disgruntled and ready to sleep for 5 weeks because some other PM pushed them to 80 h/week for the last 6 months, since I need them to have a clear mind.

Unpaid overtime creates the wrong incentives, since "too good to be true" project management plans are not punished, but rewarded.

Comment Absolutely (Score 1) 310

It's OK in the game to beat people up, shoot them, run them over with trucks, use grenades in the middle of the city, and help the mafia.

But heaven forbid the game should show that mafia people actually beat women. Everybody knows that organized crime and human trafficking are completely independent, and that truly there are 'honourable' mafiosi who just shoot among themselves, without earning money from such things.
