
Comment trust vs respect (Score 2) 460

'Scientists have earned the respect of Americans but not necessarily their trust,' said lead author Susan Fiske, the Eugene Higgins Professor of Psychology and professor of public affairs.

it was only fairly recently that someone explained the absolutely crucial difference between trust and respect, and it knocked me sideways. i used to always accept the "wisdom" that trust is EARNED.

trust - literally, by definition - CANNOT be EARNED.

*respect* can be earned, because to respect someone (or something) you learn from PAST experience and PAST actions, you make a judgement call "that thing (or person) did something cool [in the PAST], and i liked it."

trust - by definition - refers to the FUTURE. i am - in the FUTURE - going to give someone the power and authority to do something. i (the person doing the trusting) actually have absolutely NO CLUE as to whether in the FUTURE, regardless of PAST performance, the person will do what they say that they can do.

how on earth can _anyone_ say, "you earned (past tense) my trust (future decision-making)"????

this is how wars are started (and sustained): by people confusing past and future in relation to trust and respect.

so this is where it gets interesting, because the original article is actually making TWO completely SEPARATE and distinct statements:

1) the american public has analysed the PAST actions of scientists, and finds that those actions are [in some way] cool enough to be respected (past tense)

2) the american public has, within themselves, insufficient knowledge about what it is that scientists do - and this has absolutely nothing to do with the scientists but EVERYTHING to do with "the american public" - in order to take the [frightening!] step of placing their trust in the FUTURE decision-making of some individuals-that-happen-to-be-scientists.

i cannot emphasise enough that a decision *to* trust has absolutely nothing to do with the person or thing that you are trusting. the *decision* to place trust in someone else really *really* is distinct from the *analysis* of whether *to* trust.

this is where people get terribly confused. they do some analysis (based usually on past performance), and then they have to make a decision. they *believe* that the [past] analysis *IS* trust. it's not!! even once the [past] analysis has been done, you *still* need to take that step - to trust.

the link between respect and trust is that it is *usually* the respect that we have for people which tips our analysis in favour of certain individuals. but the analysis is NOT respect itself, just as trust (the decision to trust) is not the same thing as respect _either_.

now what i find ironic is that it is someone with a degree in psychology who is talking about trust being "earned". if someone whom the american public implicitly "trusts" (because they have a PhD) is saying "trust is earned", then how is anyone else supposed to know the difference between trust and respect??

Comment custom coding time (Score 1) 97

i wrote a video upload and playback system for a christian-based financial advice organisation that was uncomfortable with the idea of youtube showing advertising messages in direct contravention of the advice that they were giving their clients.

the "normal" way to do what you are asking would be to simply have a plugin that allows you to specify the youtube URL, and it would be embedded... this is not very hard to do, and, if there is not something out there already, consider paying a programmer to do it. they should not take very long [of the order of days].

however... if, like the christian-based financial advice organisation that i had to create an entire video upload, storage and playback system for, the use of youtube is completely inappropriate for your organisation (because the videos are to be kept confidential, for example) then there really isn't anything out there (i looked) and you will need to write your own.

for this task you should allocate at least two to three months, if you have access to good programmers, bearing in mind that you will need both front-end developers as well as engineers capable of back-end server work. one of the problems to solve (in basically reinventing youtube) is that the videos need to be converted to several different formats in order to make it possible to play them back on multiple browser engines, as sketched below.
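as a rough illustration of that conversion step (a minimal sketch only, assuming ffmpeg is installed; the h.264/webm codec pairings are typical choices for browser coverage, not the actual system's settings):

    # minimal sketch of the transcode fan-out: one source file is
    # converted into h.264/mp4 and vp8/webm variants via ffmpeg.
    import subprocess

    def transcode(src):
        base = src.rsplit('.', 1)[0]
        for ext, vcodec, acodec in (
            ('mp4',  'libx264', 'aac'),
            ('webm', 'libvpx',  'libvorbis'),
        ):
            subprocess.check_call([
                'ffmpeg', '-y', '-i', src,
                '-c:v', vcodec, '-c:a', acodec,
                '%s.%s' % (base, ext),
            ])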

if this is the path you've chosen then i can help save you some time. but please think carefully about what it is that you need. as a number of other people have pointed out you've said "i need a wiki to store videos" when actually what you _should_ have said is "what's the best way to offer people in-house training videos" and qualified that potentially with a list of options such as "my budget is $X" and "my time is Y" and "my in-house skill-set is A B and C".

Comment Re:I, Robot from a programmers perspective (Score 1) 165

Don't get me started on Asimov's work. He tried to write a lot about how robots would function with these laws that he invented, but really just ended up writing about a bunch of horrendously programmed robots who underwent zero testing and predictably and catastrophically failed at every single edge case. I do not think there is a single robot in any of his stories that would not self-destruct within 5 minutes of entering the real world.

hooray. someone who actually finally understands the point of the asimov stories. many people reading asimov's work do not realise that the failure of the Three Laws of Robotics is only explicitly spelled out, in actual words, in the later works commissioned by the asimov estate - when Caliban, a Zero-Law Robot, is introduced, and when it is finally revealed that Daneel (the robot onto whom Giskard psychically impressed the Zeroth Law, to protect *humanity*) is over 30,000 years old and is the silent architect of the Foundation. everywhere else it is illustrated indirectly, through many different stories, just as you describe, wisnoskij.

in the asimov series there _are_ actually robots that are successful: the New Law Robots (those that are permitted to *cooperate* with humans; these actually have some spark of creativity); Caliban, who had a Gravitonic brain and was a Zero-Law Robot - an experiment to see if a robot would derive its own laws under free will (it did); and Daneel, whose telepathic ability and Zeroth Law were given to him by Giskard. these robots are the exception. the three-law robots are basically intelligent but entirely devoid of creativity.

you have to think: how can hundreds of millions of robots, each carrying a copy of the three laws, be anything *but* a danger to human development, preventing and prohibiting any kind of risk-taking?? we already have enough stupid laws on the planet (mostly thanks to america's sue-happy culture and the abusive patent system). we DON'T need idiots trying to implement the failed three laws of robotics.

Comment COM (MSRPC), Objective-C/J and Software Libre (Score 2) 54

in looking at why both apple and microsoft have been overwhelmingly successful, i came to the conclusion that it is because both companies use dynamic object-orientated paradigms that allow components from disparate programming languages to be accessible at runtime. COM is the reason why, after 20 years, you can take a random ActiveX component written two decades ago, plug it into a modern windows computer, and it will *work*.

Objective-C is the OO concept taken to the extreme: it's actually built-in to the programming language. COM is a bit more sensible: it's a series of rules (based ultimately on the flattening of data structures into a stream that can be sent over a socket, or via shared memory) which may be implemented in userspace: the c++ implementation has some classes whilst the c implementation has macros, but ultimately you could implement COM in any programming language you cared to.

the first amazing thing about COM (which is based on MSRPC, which in turn was originally the OpenGroup's BSD-licensed DCE/RPC source code) is that, because it sits on top of DCE/RPC (ok, MSRPC), you have version control at the interface layer. the second amazing thing is "co-classes", meaning that an "object" may be "merged" with another (multiple inheritance). when you combine this with the version-control capabilities of DCE/RPC (MSRPC), you get not only binary interoperability between client and server regardless of how many revisions there are to an API, but also the ability to use co-classes to create "optional parameters" (by combining a function with 3 parameters in one IDL file with a same-named function with 4 parameters in another IDL file, 5 in another, and so on).

the thing is that:

a) to create such infrastructure in the first place takes a hell of a lot of vision, commitment and guts.

b) to mandate the use of such infrastructure, for the good of the company, the users, and the developers, also takes a lot of commitment and guts. when people actually knew what COM was, it was *very* unpopular - at the time there was nothing like python-comtypes, which makes COM so transparent that it has the *opposite* problem: it's so easy that programmers go "what's all the fuss about???" and don't realise quite how powerful what they are doing really is.
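to give a feel for that transparency, here is a hypothetical sketch (windows-only, assuming python-comtypes is installed, and using the stock Scripting.FileSystemObject co-class as an example):

    # sketch of how transparent python-comtypes makes COM: the ProgID
    # is resolved through the registry to a co-class, the interface
    # wrapper is generated automatically from the type library, and
    # the object then behaves like any other python object.
    from comtypes.client import CreateObject

    fso = CreateObject("Scripting.FileSystemObject")
    drive = fso.GetDrive("C:\\")
    print(drive.FreeSpace)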

both microsoft and apple were - are - companies where it was possible to make such top-down decisions and say "This Is The Way It's Gonna Go Down".

now let's take a look at the GNU/Linux community.

the GNU/Linux community does have XPIDL and XPCOM, written by the Mozilla Foundation. XPCOM is "based on" COM. XPCOM has a registry. it has the same API, the same macros, and it even has an IDL compiler (XPIDL). however what it *does not* have is co-classes. co-classes are the absolute, absolute bed-rock of COM and because XPCOM does not have co-classes there have been TEN YEARS of complaints from developers - mostly java developers but also c++ developers - attempting to use Mozilla technology (embedding Gecko is the usual one) and being driven UP THE F******G WALL by binary ABI incompatibility on pretty much every single damn release of the mozilla binaries. one single change to an IDL file results, sadly, in a broken system for these third party developers.

the GNU/Linux community does have CORBA, thanks to Olivetti Labs who released their implementation of CORBA some time back in 1997. CORBA was the competitor to COM, and it was nowhere near as good. Gnome adopted it... but nobody else did.

the GNU/Linux community does have an RPC mechanism in KDE. its first implementation is known famously for having been written in 20 minutes. not much more needs to be said.

the GNU/Linux community does have gobject. gobject is, after nearly fifteen years, beginning to get introspection, and this is beginning to bubble up to dynamic programming languages such as python. gobject does not have interface revision control.

the GNU/Linux community does actually have a (near-full) implementation of MSRPC and COM: it's part of the Wine Project. a project named TangramCOM did make an attempt to separate COM from Wine: if it had succeeded, it would have been maintained as a cut-down fork of the Wine Project. the Wine developers' answer - if you ask - to making a GNU/Linux application use COM is that you should convert it into a Wine (i.e. a Win32) application. this is not very satisfactory.

in other words, the GNU/Linux community consists of a set of individuals who are completely uncoordinated, each getting on with the very important task - and i mean that absolutely genuinely - the very important task of maintaining the code for which they are responsible.

the problems that they deal with are *not* those of coordinating - at a top level - with *other projects*.

now, whilst this "Alliance" may wish to "guide" the development of the GNU/Linux community, ultimately it comes down to money. do these companies have the guts to say - in a nice way of course - "here's a wad of cash, this is a list of tasks, any takers?"

but, also, does this "Alliance" have the guts to ask "what is actually needed"? rather than saying "this is what you need to do, now get on with it" - which would pretty much guarantee no takers at all - would it not be better for them to get onto the various mailing lists (hundreds if necessary) and actually canvas the developers in the software libre world: "hey, we have $NNN million available, we'd like to coordinate something cross-project that would make a difference, and we'd like *you* to tell *us* what you think is the best way to spend that money".

where the kinds of ideas floated around could be something as big and ambitious as "converting both KDE and Gnome to use the same runtime-capable object-orientated RPC mechanism so that both desktops work nicely together and one set of configuration tools from one desktop environment could actually be used to manage the other... even over a network with severely limited bandwidth [1]".

or, another idea: ensure that things like heartbleed never happen again, by making sure that the people responsible for the code - on which these and many other companies are making MILLIONS - are actually being PAID.

but the primary question that immediately needs answering is: is this group of companies acting genuinely altruistically, or are they self-serving? from an immediate read of the web site, taken at face value, it does actually look like they are genuine.

however, time will tell. we'll see when they actually start interacting with software libre developers rather than just being a web site that doesn't even have a public mailing list.

[1] i mention that because the last time i suggested this idea people said "what's wrong with using X11?? problem solved... so what are you talking about??" - i'm talking about binary-compatible APIs that stem ultimately from IDL files. *sigh*...

Comment define "customer" (Score 4, Informative) 290

from what i understand of the definition, a "customer" means "someone who is paying for a service". here, there's no payment involved, therefore there is no contract of sale. i would imagine that it's fairly safe to say that we're most definitely *not* "customers" of google.

if on the other hand these individuals are actually _paying_ google for service and are not receiving a response, _then_ i could understand.

Comment Re:Where to draw the line (Score 1) 326

there is a beautiful tale which i will share with you, which helps to explain why what Dr Stallman is doing is so important:

"the reasonable man adapts himself to the world. the unreasonable man adapts the world to himself. therefore, all progress depends on the unreasonable man".

now, if it wasn't for Dr Stallman, the average pathological corporation (see the first few minutes of the documentary "The Corporation") would take whatever it could get - and you only have to look at the endemic GPL violations on something like 98% of android smartphones and tablets to see the consequences of non-GPL software such as android.

so if it wasn't for Dr Stallman sticking to his principles, you would probably be using a computer that crashes 10 to 15 times a day for anything but the most mundane of tasks, and was entirely outside of your control.

Comment Re:so why is intel's 14nm haswell still at 3.5 wat (Score 1) 161

You seem to be conveniently ignoring Intel's Atom and Quark lines. They're all x86 and none of them has a TDP larger than 3w.

i'm not. intel's quark line - the one i saw announced on here last year - tops out at 400mhz. it has... nothing in the way of interfaces that can be taken seriously: it doesn't even have RGB/TTL video out. however, if you are right about the latest intel atom being 3w, then now i am interested! i am very grateful to you for pointing this out; i will go and check.

Comment Re:so why is intel's 14nm haswell still at 3.5 wat (Score 1) 161

Here is your answer, the A20 is freakishly slow compared to anything Intel would put their name on.

Granted, you can build a tablet to do specific tasks (like decoding video codecs) around a really slow processor and some special-purpose DSPs. But perhaps the companies in that business aren't making enough profit to interest Intel.

interestingly that assumption - that allwinner is not making enough profit - is completely wrong. allwinner is now one of _the_ dominant tablet SoC manufacturers in the world. their first revision (the A10, which was a Cortex A8) actually caused a major recession in the electronics industry when it first came out, as it was only $7.50 compared to the nearest competitor at around $11 to $12. everyone *not* using the A10 at the time was left holding worthless components; contracts for supply were reneged on; the change was so quick that many factories and design houses simply went out of business.

the volumes that allwinner is shipping are simply enormous, and, between allwinner and rockchip (its nearest competitor), the tablet market is completely and utterly dominated by processors of the type that you describe as "built to do specific tasks".

those "specific tasks" include "running the android OS at a pace that's good enough for the overwhelming majority of end-users".

in short, intel has a long *long* way to go before they can even remotely consider that they have a processor that can be taken seriously in this very large market, both in terms of price and also in terms of performance.

what is particularly interesting about your comment is that intel, it would seem, really does believe - just as you do - that "a really slow processor and some special-purpose DSPs" simply is... not enough. and, contrary to that belief, the total dominance of allwinner and rockchip shows quite clearly that "a really slow processor and some special-purpose DSPs" really *is* enough.

one of the reasons for that is because if you look at the market you find that you need:

* audio and video CODEC processing. this can be handled by a special-purpose DSP. some of these now handle 3D and 4096-pixel-wide (4K) screens.

* 3D graphics. these are handled by licensing a whole range of hard macros (special-purpose DSPs) that come with proprietary libraries implementing OpenGL ES 2.0. they're good enough, and some of them are getting _really_ good.

* a (as you put it) "really slow processor" - although if you look at allwinner's latest processor, the A80, it can hardly be called "slow": it's an 8-core monster - which covers the running of the general OS.

overall these processors are graded according to price: $5 will get you something dreadful but "good enough", $20 will get you something that's complete overkill for a tablet.

and you know what? the $7 1.2ghz dual-core ARM Cortex A7 allwinner A20, when it's paired with 2gb of RAM, is actually extremely quick. i tested a 1gb-of-RAM configuration running debian GNU/Linux: i fired up xrdp and had *five* rdesktop sessions running OpenOffice and Firefox on it, displayed onto my laptop. it didn't fall over, and it wasn't dreadfully slow.

so i think you, just like intel, are completely and entirely missing the point. and in intel's case, that means entirely missing out on a *huge* market segment.

Comment so why is intel's 14nm haswell still at 3.5 watts? (Score 0, Troll) 161

ok, so the effect of RISC vs CISC has absolutely *no* relation to power, right? so why on god's green earth is, for example, the allwinner a20 1.2ghz processor - which is still on 40nm, btw - maxing out at 2.5 watts while delivering great 1080p video, reasonable 3D graphics and so on, yet intel has to go to 14nm and, even at 14nm, STILL can't release a processor that is listed below 3.5 watts even when run in a very limited configuration??

there's a quad-core rockchip 28nm SoC. maximum (actual) top power consumption: below 3.0 watts. intel's haswell tablet SoC is 20nm: it's 4.5 watts "Scenario" Design Power i.e. if you only run certain apps in certain ways it *might* keep below 4.5 watts.

i really _really_ want to know why it is that intel cannot deliver an SoC that has an absolute peak limit of 2.5 watts.

Earth

Climate Scientist Pioneer Talks About the Future of Geoengineering 140

First time accepted submitter merbs writes At the first major climate engineering conference, Stanford climatologist Ken Caldeira explains how and why we might come to live on a geoengineered planet, how the field is rapidly growing (and why that's dangerous), and what the odds are that humans will try to hijack the Earth's thermostat. From the article: "For years, Dr. Ken Caldeira's interest in planet hacking made him a curious outlier in his field. A highly respected atmospheric scientist, he also describes himself as a 'reluctant advocate' of researching solar geoengineering—that is, large-scale efforts to artificially manage the amount of sunlight entering the atmosphere, in order to cool off the globe."

Comment Re:complex application example (Score 4, Informative) 161

> the first ones used threads and semaphores through python's multiprocessing.Pipe implementation.

I stopped reading when I came across this.

Honestly - why are people trying to do things that need guarantees with python?

because we have an extremely limited amount of time as an additional requirement, and we can always rewrite critical portions - or, later, the entire application - in c once we have delivered a working system that lets the client get some money in and therefore stay in business.

also, i worked with david, and we benchmarked python-lmdb after adding in support for looped sequential "append" mode: we got a staggering 900,000 100-byte key/value pairs written per second, and a sequential read performance of 2.5 MILLION records per second. the equivalent c benchmark is only around double those numbers. we don't *need* the dramatic performance increase that c would bring if, right now, at this exact phase of the project, we are targeting something that is 1/10th to 1/5th the performance of c.
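for a feel of what that kind of benchmark looks like, here is a minimal sketch (the path and record count are illustrative, not the actual test harness):

    # minimal sketch of a looped sequential "append"-mode write test
    # using the python-lmdb bindings. keys are written in sorted order
    # so lmdb can take its append fast-path.
    import time
    import lmdb

    env = lmdb.open('/tmp/bench.lmdb', map_size=2 ** 30)  # 1 GiB map
    n = 900000
    start = time.time()
    with env.begin(write=True) as txn:
        for i in range(n):
            txn.put(b'%016d' % i, b'x' * 100, append=True)
    print('%.0f writes/sec' % (n / (time.time() - start)))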

so if we want to provide the client with a product *at all*, we go with python.

but one thing that i haven't pointed out is that i am an experienced linux python and c programmer, having been the lead developer of samba tng back from 1997 to 2000. i simply transferred all of the tricks that i know involving while-loops around non-blocking sockets and so on over to python. ... and none of them helped. if you get 0.5% of the required performance in python, it's so far off the mark that you know something is drastically wrong. converting the exact same program to c is not going to help.

The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.

we don't have anything like that [strict timing guarantees] - not for the data itself. the data comes in on a 15 second delay (from the external source that we do not have control over) so a few extra seconds delay is not going to hurt.

so although we need the real-time response to handle the incoming data, we _don't_ need the real-time capability beyond that point.

Take the incoming messages from UDP and post them on a message bus should be step one so that you don't lose them.

.... you know, i think this is extremely sensible advice (which i have heard from other sources), so it is good to have it confirmed... but it raises the following questions:

* how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received?

* what support from the linux kernel is there to ensure that this happens?

* is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else?

* the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process?

* what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent?

this is exactly the kind of thing that is entirely missing from the linux kernel. temporary automatic re-prioritisation (priority inheritance) was something that sun microsystems added to solaris quite some time ago.

to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements.
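that said, for the wake-up part of the question there is at least SCHED_FIFO. a sketch of how the receiving process could use it (linux-only, needs root or CAP_SYS_NICE; the port and buffer size are illustrative) - note this addresses wake-up priority only, and does nothing about priority inversion on a shared queue lock:

    # sketch: run the UDP receiver under the SCHED_FIFO realtime policy
    # so it pre-empts ordinary processes whenever a datagram arrives.
    import os
    import socket

    # realtime priority 50 (range 1-99); requires privilege
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # enlarge the kernel receive buffer to ride out scheduling gaps
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
    sock.bind(('0.0.0.0', 9999))

    while True:
        data, addr = sock.recvfrom(65536)
        # hand off to the message bus as fast as possible here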

Comment complex application example (Score 4, Insightful) 161

i am running into exactly this problem on my current contract. here is the scenario:

* UDP traffic (an external requirement that cannot be influenced) comes in
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
* each "job" must be farmed out to *multiple* scripts (for example, 15 is not unreasonable)
* the responses from each job running on each script must be collated then post-processed.

so there is a huge fan-out where jobs (approximately 60 bytes) are coming in at a rate of 1,000 to 2,000 per second; those are being multiplied up by a factor of 15 (to 15,000 to 30,000 per second, each taking very little time in and of themselves), and the responses - all 15 to 30 thousand - must be in-order before being post-processed.

so, the first implementation is in a single process, and we just about achieve the target rate of 1,000 jobs per second, but with only about 10 scripts per job.

anything _above_ that rate and the UDP buffers overflow and there is no way to know if the data has been dropped. the data is *not* repeated, and there is no back-communication channel.

the second implementation uses a parallel dispatcher. i went through half a dozen different implementations.

the first ones used threads and semaphores through python's multiprocessing.Pipe implementation. the performance was beyond dreadful; it was deeply alarming. after a few seconds performance would drop to zero. strace investigations showed that at heavy load the futex OS call was maxed out near 100%.

next came replacement of multiprocessing.Pipe with unix socket pairs, and of threads with processes, so as to regain proper control over signals, sending of data and so on. early variants of that would run absolutely fine up to some arbitrary limit, then performance would plummet to around 1% or less, sometimes remaining there and sometimes recovering.

next came replacement of select with epoll, and the addition of edge-triggered events. after considerable bug-fixing a reliable implementation was created. testing began, and the CPU load slowly cranked up towards the maximum possible across all 4 cores.
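for reference, a simplified sketch of what an edge-triggered epoll worker loop involves (the socket path and framing are illustrative, not the actual code):

    # sketch of an edge-triggered epoll worker loop: with EPOLLET the
    # socket must be drained completely on each event, otherwise the
    # remaining data never triggers another wake-up.
    import select
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect('/tmp/dispatcher.sock')
    sock.setblocking(False)

    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLIN | select.EPOLLET)

    while True:
        for fd, events in ep.poll():
            while True:
                try:
                    chunk = sock.recv(4096)
                except BlockingIOError:
                    break
                if not chunk:
                    break
                # decode and process job bytes here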

the performance metrics came out *WORSE* than the single-process variant. investigations began and showed a number of things:

1) even though it is only 60 bytes per job, the pre-processing required to decide which process to send each job to was so great that the dispatcher process was becoming severely overloaded

2) each process was spending approximately 5 to 10% of its time doing actual work and NINETY PERCENT of its time waiting in epoll for incoming work.

this is unlike any other "normal" client-server architecture i've ever seen before. it is much more like the mainframe "job processing" that the article describes, and the linux OS simply cannot cope.

i would have used POSIX shared-memory queues but the implementation sucks: it is not possible to identify the shared memory blocks after they have been created so that they may be deleted. i checked the linux kernel source: there is no "directory listing" function supplied, and i have no idea how you would even mount the IPC subsystem in order to list what's been created anyway.

i gave serious consideration to using the python LMDB bindings because they provide an easy API on top of memory-mapped shared memory with copy-on-write semantics. early attempts at that gave dreadful performance: i have not investigated fully why that is: it _should_ work extremely well because of the copy-on-write semantics.

we also gave serious consideration to just taking a file, memory-mapping it and then appending job data to it, then using the mmap'd file for spin-locking to indicate when the job is being processed.
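as a very rough sketch of that last idea (illustrative only - the slot layout is mine, and the single ready-flag is a crude stand-in for a real spin-lock):

    # sketch: a shared mmap'd file used as a job slab. each fixed-size
    # slot holds a one-byte "ready" flag plus job data; a worker spins
    # (or polls) on the flag. NOT a safe lock - illustration only.
    import mmap
    import os

    SLOT = 64            # 4 bytes of state + up to 60 bytes of job data
    NSLOTS = 1 << 16

    fd = os.open('/tmp/jobs.mmap', os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, SLOT * NSLOTS)
    buf = mmap.mmap(fd, SLOT * NSLOTS)

    def post(idx, job):
        off = idx * SLOT
        buf[off + 4:off + 4 + len(job)] = job
        buf[off] = 1     # publish: worker sees the flag flip and runs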

the bottom line of all of these crazy implementations is that i basically have absolutely no confidence in the linux kernel, nor in the GNU/Linux POSIX-compliant implementation of the OS on top - i have no confidence that it can handle the load.

so i would be very interested to hear from anyone who has had to design similar architectures, and how they dealt with it.

Comment legal ramifications of identity verification (Score 1) 238

i think one of two things happened here. the first is that it might have finally sunk in at google that even just *claiming* to have properly verified user identities leaves them open to lawsuits should they fail to have carried out the verification checks that other users *believe* they have carried out. on every other service, people *know* that you don't trust the username: for a service to claim that it has truly verified the identity of the individual behind the username is reprehensibly irresponsible.

the second is that they simply weren't getting enough people, so they have "opened up the doors".
