
Comment Re:Cliche, but true... (Score 1) 580

Yes, and also: if some intern were fixing a problem in your code while sitting next to you, what sort of things would you have to explain for them to understand and fix your code, instead of just dissing it to their boss as a load of crap?

I often imagine I'm explaining to an intern how the code works when I write my comments. I assume they understand the language and the libraries (they can just Google that stuff), but what does this class do, and why? That's the target audience.

The other possible target audience is myself in a year's time.

To be honest, the longer I code, the more these target audiences converge on the same set of comments ;-) since I've learned how much one forgets on coming back to the same code two years later.

Comment Re:Where is your proof? (Score 1) 580

> Well that's not in the least psychotic.

True, but no-one (or at least very few people) enjoys debugging other people's work. They'd rather be writing their own green-field implementation using the latest cool technologies: .NET 3 or .NET 4, or Ruby, or Python 2.6, or whatever. Who wants to be debugging some COBOL stuff, or something written in C? Answer: very few people.

So, when people are debugging other people's code, they (we) would often rather dis' the other person's code and write it off as unmaintainable crap, so that we can justify replacing it with something 'beautiful' using modern technologies.

But we can't ethically justify that, since, well, it's more expensive and time-consuming to write it from scratch than to just maintain the old stuff for another few years.

But we'll look for any excuse we can to make that replacement seem more 'ethical', and someone's comments, or lack of comments, are fertile ground for such 'justification'.

So, when it's our own code being ripped apart by some intern who'd rather write their own: the easier it is for that intern to understand the code and our comments, the fewer excuses they have to bash it, and the easier it is for them to fix the issue and move on to something more interesting. It's win-win: they fix the problem quicker, and our code doesn't get dissed. Hopefully. Or at least, not as much...

Comment Re:Blame Intel... and the manufacturers... (Score 1) 394

I've had an Eee PC 901 (9 inch screen) as my main (and only) PC for 12 months now, using it about 8 hours a day.

On this tiny PC, I can:
- compile SpringRTS, write AIs for it in Java, and run Eclipse plus MySQL or Postgres in parallel
- run Eclipse + Glassfish
- run Apache + PHP + Postgres/MySQL
- watch videos
- use chat, Skype, email, read Slashdot
- run VirtualBox, with multiple OSes in parallel
- read books

If I need a bit of extra juice, I can just ssh into Amazon EC2, for trivial amounts of money.
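For example (a minimal sketch, with a hypothetical key pair and instance hostname; in practice you'd point it at whatever instance you've already launched):

ssh -i ~/.ssh/my-ec2-key.pem ubuntu@ec2-203-0-113-42.compute-1.amazonaws.com
# ...and from there, run the heavy compile or test job on the rented hardware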

It weighs the same as a book and it's always with me, at all times. It's awesome.

Comment Re:Radio Shack (Score 1) 430

To be fair, this can be partially seen as a reflection of market demands.

In China, there are many electronics buildings, with several floors of tiny family-run stores selling resistors, capacitors, op-amps, and so on.

The floors are structured like this:
- ground floor, mobile phones, and more mobile phones. This is where most people go
- second floor, computer accessories: portable hard drives and so on
- third floor: electronics: resistors, capacitors and so on

Why sell a thousand resistors for a couple of dollars, making a margin of what, 10 cents, when you can sell a mobile phone and plan for several hundred dollars, with a markup of half that?

Comment Re:HP didn't make the list? (Score 1) 430

You know, I used to create things I thought were cool, and think I was performing some essential task for society.

Then I took a break, and looked around, and realized that when I didn't do some specific task, other people would do it instead, and they'd often do it better than me: better designed, better documented, better marketed, more elegant, more beautiful.

Who's to say that if Compaq hadn't done that, either someone else wouldn't have done the exact same thing, or someone else wouldn't have done something similar, *but better*?

There's a lot of crappy stuff in BIOSes (CHS addressing and all that), and in the x86 architecture in general. As in any other architecture, of course. Still, the point is: maybe if Compaq hadn't cloned the IBM BIOS, someone else would have made some other architecture into the commodity architecture of choice for the next twenty years, and maybe it would have been better in some subtle way?

Comment Re:Zero warning (Score 1) 162

This seems dangerously close to the issues in Wittgenstein's thoughts on categorization. More explicitly: let's say you determine that the liver cannot, for some reason, survive more than 60 minutes below 35 degrees Celsius. But then what?
- what if you replace the liver with an artificial liver? Does the rat still contain the essential characteristics of 'rattiness'?
- what if the rat is spectacularly fat, so it takes longer for its liver to drop below that temperature? Does that mean the rat is no longer a rat?
- or long fur?
- etc ...

A generally accepted way to 'prove' a negative hypothesis is to demonstrate it at a specific statistical confidence level, e.g. 99% or 99.9%. This has obvious flaws, but it's generally useful, as long as one keeps its limitations in mind, e.g. the file drawer problem and publication bias: http://en.wikipedia.org/wiki/Misuse_of_statistics

Comment Re:TCP/IP is a cloud we trust (Score 1) 93

Many banks use multiple layers of security for data traversing WAN links:
- the WAN link itself is supposedly secure and encrypted intrinsically by the provider
- VPNs run over the WAN links; all traffic runs over these VPNs
- data is forbidden from being sent in the clear, even though it's running over a VPN; ssh et al. are used to secure the data that traverses the link

The advantage of layering is:
- if one layer of security fails by accident, the data is not necessarily compromised
- if one layer of security fails by design or intrusion, the data is not necessarily compromised
- no one person or group has the power to access everyone's data, i.e. segregation of responsibility: the network team can, yes, get access to all network traffic, but it's all mandatorily encrypted by the application teams anyway...

Application teams can obviously see all their own data unencrypted, but they cannot see the data from other teams, since each team has encrypted their own data.
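As a minimal sketch of that last application-level layer (hypothetical hostnames and ports; the WAN and VPN layers underneath are assumed to already be in place):

# tunnel the application's database traffic through ssh, even though it already crosses a VPN
ssh -N -L 15432:db.internal.example:5432 appuser@gateway.internal.example
# the application then connects to localhost:15432 rather than to the database directly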

Now... moving onto the cloud. There is, as far as I can see, very little room for layering:
- all data is available in RAM in unencrypted form
      - an attacker with access to the physical VM host can read arbitrary data from the RAM of executing guests
- the network adapter of the virtual host is, in many cases, bridged directly to the public internet; even when it is connected to a cloud-provided VPN, or uses its own VPN set up by the guest's company, the number of layers is significantly smaller than for a server safely tucked away in a secure data center behind multiple layers of firewalls, DMZs, enterprise intrusion detection devices and so on...
- the block storage itself (EBS, for example) is just a few steps away from a potential attacker: yes, EBS is in theory wiped to zero by Amazon, and yes, one can run encryption on top of the EBS volume (a minimal sketch follows below this list), but that is still only two layers. What if the wipe gets turned off one day without the guest company knowing? What if the guest's sysadmin forgets to encrypt the volume for some reason?
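A minimal sketch of that encryption layer, assuming the attached EBS volume shows up as /dev/xvdf (device names vary) and using standard dm-crypt/LUKS rather than anything EBS-specific:

cryptsetup luksFormat /dev/xvdf           # set up LUKS encryption on the raw volume
cryptsetup luksOpen /dev/xvdf securedata  # unlock it as /dev/mapper/securedata
mkfs.ext3 /dev/mapper/securedata          # create a filesystem on the encrypted mapping
mkdir -p /mnt/secure
mount /dev/mapper/securedata /mnt/secure  # anything written here hits EBS as ciphertext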

I imagine that none of these problems is insurmountable, but one can see why large corporations would be reluctant to move their sensitive servers, or even their not-so-sensitive servers, onto publicly available cloud hosts.

Comment Re:Thanks Mark (Score 1) 163

Yeah, you can do something similar in Debian too: you can put that in the /etc/network/interfaces file with a very similar syntax.
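A minimal sketch of what that stanza might look like (hypothetical SSID and passphrase, assuming the wpasupplicant package's ifupdown hooks are installed and the wifi interface is wlan0):

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid MyHomeNetwork
    wpa-psk mysecretpassphrase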

However, if you want to do anything other than connect to WPA (say, connect to a legacy WEP network, or to an open wifi point), that works less well.

Also, using wpa_supplicant directly is portable across other Linux distributions, such as Red Hat, whereas the /etc/network/interfaces file is not.

Comment Re:Thanks Mark (Score 2, Insightful) 163

You can do that using wpa_supplicant. It's less scary than it sounds. Get rid of the existing dbus-launched wpa_supplicant process (its service file lives in /usr/share/dbus-1/services), then run:

wpa_supplicant -D wext -i ra0 -c /etc/wpa_supplicant/wpa_supplicant.conf

...where ra0 is your wifi interface, and -D wext selects the generic wireless-extensions driver backend, which you can normally leave as-is.

Add your networks to wpa_supplicant.conf ('man wpa_supplicant.conf').
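A minimal network block (hypothetical SSID and passphrase) looks something like this:

network={
    ssid="MyHomeNetwork"
    psk="mysecretpassphrase"
}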

You can control it and see what it's doing using:

wpa_cli -i ra0 status

I do agree with you though. Here are my thoughts on command line vs GNOME: Windows vs Linux, 'everything in Linux can be scripted' -> not really.

Comment Re:Seriously? (Score 1) 296

I basically don't bother providing Windows support any more. It's not that I don't want to, but I spend all my time using Ubuntu, so I know all the little 'gotchas' and workarounds to get wifi working in Ubuntu. When someone comes to me with Windows and broken wifi, I just kind of click around, turn the wifi on and off -> no clues.

Put in a Live Ubuntu usb key -> wifi works.

Show them the internet works just fine with my 'magic usb key'. Pull it out, walk away :-P

Or put it this way: my girlfriend uses Ubuntu on her PC, and she's just fine with it. It has a web browser (Firefox), can play videos (VLC; Flash plugin), handles JavaScript, and she can read PDF files (Evince) and read/write Word documents (OpenOffice).

I kind of think this is how Linux will end up in the hands of 'ordinary users': if they want to count on the support of their geek friend, they might find it much easier to get that help if they're running something their geek friend is happy to support for free.

Comment Re:Those onion belts are going bad (Score 1) 496

I felt BAG's post was pretty insightful, to be honest. One could argue that the entire Facebook interface is a type of programming language in some ways. Not Turing complete, admittedly, and you can argue semantics over what is and isn't a programming language. My feeling is: if you can communicate to the computer something that you want done, and it does it, then whatever the semantics, that's a pretty cool thing to happen. Can Facebook users communicate what they want to do to Facebook? They seem to do quite well at that, I feel. Does Facebook do roughly what they want as a result? Sometimes ;-) but more seriously: yes, I feel that Facebook does exactly what people want.

The 'figuring out what people want' can be implicit in the design of the programming language, which I feel fits fairly closely with what BAG was saying in his post.
