Comment Re:ridiculous (Score 1) 730
Sure. Unfortunately, due to budget cuts the local police cannot join the meeting.
That's OK. Batman has plenty of money.
No, these are bald-faced lies!
But if you said that you are lying, then that statement is a lie, which means you speak the truth. But if you speak the truth then you are lying. But you cannot be lying because that would mean you speak the truth... ERROR -- ERROR -- NORMAN, PLEASE ASSIST....
The iPad is an AWESOME device for textbooks: reading about dinosaurs and having animations or interactive parts is incredibly cool.
Well, yes... except in this case the animation and interactivity parts would all be lies, because nobody has ever seen a dinosaur move, much less interacted with one.
For that matter, no one has seen a dinosaur in any form other than fossilized skeletal remains. All the illustration and sculpture we see featuring dinosaurs with flesh on them is also fiction. Even some of the assembled skeletons have turned out to be wrong. It's fiction based on our best reasoned guesses about how the animals probably looked, but fiction nonetheless.
If such fiction is well-reasoned, then it is not without value.
No, but if you rip out the battery it can make a great incendiary device.
Keep it up! There's no way this is ever getting old!
I'm kind of surprised nobody's mentioned this aspect, because it seems sort of obvious to me.
HP can make a *lot* of money selling, for lack of a better term, "open" tablets.
I am a bit skeptical about that... I mean, tinkerers can be very enthusiastic about a product that's fun to hack on - but there aren't necessarily enough of them to make it worth manufacturing a tablet for them...
The whole project lost its focus: Palm used to be a neat piece of software for a PDA. It was not bloated with a file system, with Flash, and with all this other junk that has become the primary focus of Apple.
I think it's worth bearing in mind that their OS design was a great fit for the kind of devices they could produce in the late 90s. These days, the capabilities of a handheld machine are a lot better, so what would have been "bloat" thirteen years ago is now pretty reasonable in terms of the functionality provided vs. the resources consumed.
>Using quoting syntax to provide commentary on what someone said instead of what they actually said
I guess that helps complete the illusion that you're on 4chan.
Many bans on cellphones allow hands-free, but I've always felt it wasn't because it is better to use hands-free, but because it is impossible to enforce. (If someone can back up or refute my assumption, mod him up).
Either that, or someone argued that hands-free devices made cell calls in the car safer, so they could sell more hands-free calling devices.
The use cases, options, and interfaces are different for searching programming language source files, XML files, and other text.
You make a good point, but I think a tool that forces the different data types into a single mould could still be useful, even if it can't provide all the functionality that a specialized tool would.
Right, but what isn't there already a parser for on CPAN? And if you are handy with Perl, what kind of comparison is difficult?
I couldn't say, honestly.
I think at that point you're beyond "Perl's capabilities" and into the realm of "capabilities of things you can implement in Perl".
This violates so many rules of the Unix philosophy that I don't even know where to begin...
I'll take this on. It's a subject that is of particular interest to me.
First of all, you have to consider whether it even matters that a tool violates "rules" of the "Unix philosophy". I mean, seriously, why assume that some system design ideas cooked up 30-40 years ago are necessarily the One True Path? Because "those who do not understand Unix are doomed to reinvent it poorly"? What if the designers in question do understand Unix? Or what if <gasp> they might actually have some ideas that surpass those of Doug McIlroy, ESR, K & R, and so on?
Second, how does one account for tools like Perl? By many accounts it is one of the greatest Unix tools ever created. By combining the functionality and syntax of several useful tools, incorporating a rich regexp syntax, and binding it together with a general-purpose programming language, it can be a very versatile and effective tool. But it runs afoul of various "rules" as well: (I will use a star to mark the rules I don't particularly agree with)
Perl's biggest "violation", which it shares with other scripting languages, is that first one: "do one thing and do it well." Perl, Python, etc. are perfectly capable of doing a fork/exec or popen or loading a
Perl could be a contentious example - but I chose it because to me, it and other scripting languages are examples of people bypassing the shell environment, rather than augmenting it. I would go so far as to say that it's a symptom of a failure of the Unix shell environment. Much of that failure has to do with these two facets of the "Unix philosophy", which are at odds with each other:
There is nothing intrinsic about text streams that makes them a "universal interface". They are only "universal" in the sense that we already have tools in place to deal with them. And yet, look at all the file formats that aren't based on some kind of text stream... Media files, for instance.
Building all our tools around the assumption that we can't make assumptions about the format of our data means it's harder than it ought to be to "write programs to work together". Scripts may have to incorporate multiple parsing/serializing steps (and the tools need to provide functionality to support this) - or people can bypass the obstacles to coupling programs together with pipes and line-processing text tools by ignoring McIlroy's first clause: go the Perl route, and use a tool that incorporates all the functionality into one environment so you don't need to make programs work together.
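To make the parse/serialize overhead concrete, here's a minimal sketch in Python (an illustrative stand-in for Perl or any pipeline stage - the record format and field names are made up): every stage that receives structured records as text must parse them, do its real work, then serialize them again for the next stage.

```python
import json

# Structured records as they would arrive over a pipe: one
# text line per record.
lines = [
    '{"user": "alice", "score": 3}',
    '{"user": "bob", "score": 7}',
]

def bump_scores(text_lines):
    """One pipeline stage: parse, transform, serialize."""
    out = []
    for line in text_lines:
        record = json.loads(line)       # parse step
        record["score"] += 1            # the actual work
        out.append(json.dumps(record))  # serialize step for the next stage
    return out

print(bump_scores(lines))
```

Only the middle line does anything useful; the parse and serialize steps are pure coupling overhead, repeated in every stage of the pipeline.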
Grep has issues with data blocks as well. "With regular expressions, you don't really have the ability to extract things that are nested arbitrarily deep," Weaver said.
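The nesting limitation is easy to demonstrate. This Python sketch (regexes here stand in for grep's; the input string is made up) shows that a plain regular expression can only pick out a non-nested innermost group, while a tiny depth-counting parser recovers the full nested structure:

```python
import re

text = "(a (b (c)) d)"

# A regex with a "no parens inside" character class can only
# match the innermost, non-nested group.
innermost = re.search(r"\(([^()]*)\)", text).group(1)

def outer_group(s):
    """Track nesting depth to extract the full outermost group."""
    depth = 0
    start = None
    for i, ch in enumerate(s):
        if ch == "(":
            if depth == 0:
                start = i
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth == 0:
                return s[start:i + 1]
    return None

print(innermost)          # only "c"
print(outer_group(text))  # the whole "(a (b (c)) d)"
```

Matching arbitrarily deep nesting requires keeping a counter (or a stack) - exactly the state that classic regular expressions don't have.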
If your data structures are so complex that diff/grep won't cut it, they should probably be massaged into XML, in which case you can use XSLT off the shelf. It's already customizable to whatever data format you're working with.
Massaging your data is a problem at least as complex as bgrep/bdiff's proposed method of using plug-ins to generate parse trees and operating on parsed data. The only difference is that the representation for the parsed data (process-native data structures in RAM vs. XML text being passed over a pipe) is less efficient.
With [operational data in block-like data structures], a tool such as diff "can be too low-level," Weaver said. "Diff doesn't really pay attention to the structure of the language you are trying to tell differences between." He has seen cases where diff reports that 10 changes have been made to a file, when in fact only two changes have been made, and the remaining data has simply been shifted around.
No, 10 changes have been made.
That's kind of a narrow view. The aim here is to find the relevant differences in the file, which is often a more useful piece of information (for the user or for the computer) than a byte-level diff result. There are bound to be cases where you really do need to see the exact difference in the byte sequences of two files - but there are also cases where it's useful to get more contextually-relevant information.
The optimum committee has no members. -- Norman Augustine