
Comment Not surprising... (Score 1) 196

"researchers have struggled to create machines that show much evidence of intelligence at all."

They focus completely on logic and logic systems and ignore the required system of valuations that supports those logic systems? It's like building a car with a great engine but no frame or wheels; of course it can't go anywhere.

Comment Re:Please (Score 1) 360

It's like saying "Hey, Chevrolet, you know your customers like the radio station set to 101.9, why can't you engineer your cars to respect their choice instead of forcing your nefarious 101.5 agenda."

Yeah, but this is a Mozilla car analogy we're talking about here.

In the current 2015.7 model release, the UX team has decided that a 5-button hamburger menu on an AM dial (and only from 1100 kHz to 1150 kHz in 10 kHz increments) is all that's needed. Users who want to access a wider range of frequencies in the AM band are free to write an extension or purchase a third-party radio head unit.

To further improve the user experience, we remind prospective extension developers that in the Aurora channel for the 2016.1 model year, the about:config setting for frequency.megavskilohertz has been removed, along with the FM antenna. The UX team has made this recommendation based on telemetry that suggests that few drivers actually listen to FM radio, especially since the 2013.6 model, in which the AM/FM toggle switch was removed because the UX team for 2012.1 felt it was cluttering the dashboard.

Comment I was thinking of "high end" in terms of (Score 1) 152

what consumers had access to by walking into a retail computer dealership (there were many independent white box makers at the time) and saying "give me your best."

You're probably right about me underestimating the graphics, though it's hard to remember back that far. I'm thinking 800x600 was much more common. If you could get 1024x768, it was usually interlaced (i.e. "auto-headache"), and it was rare to be able to get it with 24-bit color—S3's first 16-bit-capable chips didn't come out until late 1991, if I remember correctly, though I could be off.

SCSI was possible, but almost unheard of as stock; you either had to buy an add-on card and deal with driver/compatibility questions, or use one of the ESDI-to-SCSI bridge boards or similar. Same thing with ethernet, token ring, or any other dedicated networking hardware and stack. Most systems shipped with a dial-up "faxmodem" at the time, and users were stuck using Winsock on Windows 3.1; it was nontrivial to get it working. Most of the time, there was no real "networking" support in the delivered hardware/software platform; faxmodems were largely used for dumb point-to-point connections using dial-up terminal emulator software.

And in the PC space, the higher-end you went, the less you were able to actually use the hardware for anything typical. Unless you were a corporate buyer, you bought your base platform as a whitebox, then added specialized hardware matched with specialized software in a kind of 1:1 correspondence—if you needed to perform task X, you'd buy hardware Y and software Z, and they'd essentially be useful only for task X, or maybe for tasks X1, X2, and X3, but certainly not much else—and the same was even true for memory itself. Don't forget this is pre-Windows 95, when most everyone was using Win16 on DOS. We can discuss OS/2, etc., but that again starts to get into the realm of purpose-specific and exotic computing in the PC space.

There were, as I understand, a few verrry exotic 486 multiprocessors produced, but I've never even heard of a manufacturer and make/model for these—only the rumor that it was possible—so I doubt they ever made it into sales channels of any kind. My suspicion (correct me if I'm wrong) is that they were engineered for particular clients and particular roles by just one or two organizations, and delivered in very small quantities; I'm not aware of any PC software in the 1992 timeframe that was even multiprocessor-aware, or any standard to which it could have been coded. The Pentium processor wasn't introduced until '93, and the Pentium Pro with GTL+ and SMP capabilities didn't arrive until 1995. Even in 1995, most everything was either Win16 or 8- or 16-bit code backward compatible to the PC/XT or earlier, and would remain that way until around the Win98 era.

The UNIX platforms were standardized around SCSI, ethernet, big memory access, high-resolution graphics, and multiprocessing and presented an integrated environment in which a regular developer with a readily available compiler could take advantage of it all without particularly unusual or exotic (for that space) tactics.

Comment So funny, but yeah, totally true. (Score 1) 152

The 386 box that I installed Linux on my first time around was 4MB (4x1MB 30-pin SIMMs). 4MB! I mean, holy god, that's tiny. It seemed sooooo big compared to the 640KB of the DOS PCs that came before, and yet it's basically the same order of magnitude. Not even enough to load a single JPG snapshot from a camera phone these days.

Comment Wow, end of an era. (Score 4, Interesting) 152

For more than just a couple of us here, I suspect, there was a time when "Sparc," "UNIX," "graphics," "Internet," and "science" were all nearly synonymous terms.

Simpler times. Boy did that hardware last and last and last in comparison to the hardware of today.

Well, I suppose it can finally no longer be said that the Sparcstation 10 I keep here just for old times' sake can still run "current Linux distributions." But it's still fun to pull it out for people, show them hundreds of megabytes of RAM, 1152x900 24-bit graphics, gigabytes of storage, multiple ethernet channels, and multiple processors, running Firefox happily, and tell them it dates to 1992, when high-end PCs were shipping with mayyybe 16-32MB RAM, a single 486 processor, 640x480x16 graphics, a few dozen megabytes of storage, and no networking.

It helps people to get a handle on how it was possible to develop the internet and do so much of the science that came out of that period—and why, even though I don't know every latest hot language, the late '80s/early '90s computer science program that I went to (entirely UNIX-based, all homework done using the CLI, vi, and gcc, emphasis on theory, classic data structures, and variously networked/parallelized environments, with labs of Sparc and 88k hardware all on a massive campus network) seems to have prepared me for today's real-world needs better than the programs many of my younger colleagues went to, with lots of Dell boxes running Windows-based Java IDEs.

Comment As someone that works with data sets all the time, (Score 4, Informative) 143

here are my answers. Spreadsheets are used in several cases:

1) When you have a small-to-medium-sized dataset (100m data points) and want to do a particular set of calculations or draw a particular set of conclusions from it just once or twice—so that the time invested in writing code in R or something similar would be more than the time needed to just bung a few formulas into a spreadsheet and get your results. Once you get into analyses or processes that will be repeated many times, it makes more sense to write code.

2) Similar case, when you need to work with essentially tabular database data, but the operations you're performing (basic filtering, extracting records based on one or two criteria, just handing data from one person to the next) are either so simple or will be repeated so rarely that a MySQL database is overkill and just emailing a file back and forth is easier.

3) When you are working with data as a part of a team, and certain members of the team who are specialists in some areas related to the data, or (for example) members of the team who are doing your data collection, aren't particularly computationally expert. Spreadsheets are hard for laymen, but it's doable—a dozen or two hours of training and people can get a general, flexible grasp of spreadsheets and formulae. It takes a lot longer for someone to become basically proficient with R, MATLAB, MySQL, Python, etc., and you really want those specialists to just be able to do what they do to or with the data, rather than focusing their time and effort on learning computational tools. Spreadsheets are general purpose and have a relatively shallow learning curve relative to lots of other technologies, but they enable fairly sophisticated computation to take place—if inefficiently at times. They're like a lowest common denominator of data science.

We use spreadsheets all the time in what we do, mostly as a transient form. The "heavy hitting" and "production" data work takes place largely in MySQL and R, but there are constant temporary/intermediate moments in which data is dumped out as a CSV, touches a bunch of hands that are really not MySQL or R capable, and then is returned in updated form to where it normally lives.
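For what it's worth, that round trip usually amounts to a couple of shell one-liners. A minimal sketch, assuming a hypothetical table named results in a database named projectdb (and local_infile enabled for the reload step):

    # Dump the table as tab-separated text with a header row; any spreadsheet opens it fine.
    # (Table, database, and file names here are hypothetical.)
    mysql --batch -e 'SELECT * FROM results' projectdb > results.tsv

    # Once the file has made its rounds, pull the edited copy into a staging table for review.
    mysql --local-infile=1 -e "LOAD DATA LOCAL INFILE 'results_edited.tsv' INTO TABLE results_staging IGNORE 1 LINES" projectdb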

Comment How can openness lead to closeness? (Score 3, Insightful) 250

Because the number one thing openness generates is chaos and multiple competing claims about reality: say, many Linux distributions, each claiming to be great, and in fact many variants of Linux distributions, often with many versions and many wrinkles, and many variations of packages, libraries, and so on.

If you want to build or customize things, openness is great. If you just want to pick something up, use it, and move on, a huge amount of confusion, overhead, and pain is involved in trying to pick the "right" version (particularly if you're unfamiliar with openness and wrongheadedly looking for the "real" version, as many early Linux dabblers were) and get it to work quickly and easily.

There is thus a huge amount of value added by anyone that quells the chaos—even in a tiny sphere or product—and that can quickly, clearly, and succinctly explain to users just what their version does, without ambiguity either within itself as an instance or over time. The nature of the beast—this value is the result of "closing the openness," if you will—means that it can't be opened back up, or the value will be lost.

End users want operating systems and devices that are not open systems with unclear edges that bleed into the ecosystem, but rather a single, coherent object or product that they can acquire, use in predictable and stable ways, and then lay down once again. They want systems and devices about which books can be written (and bought, and referred to months down the road) without quickly becoming obsolete, and with minimal risk that this book or that add-on they purchase will fail to work because they'd misconstrued the incredibly subtle differences and variations in product naming, versioning, and so on.

In short, massive openness is incredibly generative and creative, but it leaves in place a systems/software/hardware version of the "last mile problem" for computing. Having a fabulous network is one thing, but consumers just want one wire coming into the house; they want it to work, they want it to be predictable and compatible with what they have, and they want to know just where it is and what its properties, limits, and costs are. They are not interested in becoming engineers; the technology they use is only useful to them as a single, tiny, and manageable facet of the larger ecosystem that is their life.

This "last mile problem" cannot be solved with openness in hardware or software any more than the last mile problem for wired providers can be solved by opening up all of urban geography to any comers and saying "lay all the cable you want, anywhere you want, to and from any building or system!" First off, it would result in a mess of wires (not un-analagous to what we see across much of free software's development space) and next because most consumers wouldn't be able to make heads or tails of it, much less make a choice, and they'd probably resent the complexity in their backyard and try to do away with it.

Openness leads to closedness because, to the extent that openness dominates in the development and engineering space, closedness increases in precisely the same measure as the critical means of carrying whatever is developed to the average consumer.

Comment Broken by Design (Score 1) 165

The U.S. form of representative democracy was set up by the "founders" to be what it is, and it is no mistake that the upper class fights tooth and nail to keep it that way. The main problem with representative democracy goes beyond the founders though (which may explain why it was chosen in the first place) and is very similar to the main problem with the economic system called communism: Both require that humans act outside their behavior patterns to reach some ideal abstraction.

Where communism insists that humans must act according to the best interests of the whole before acting in one's own best interests, representative democracy insists that a specific human act according to the best interests of the whole before acting in their own interest. The problem is that humans act according to a hierarchy that is different: They will first act in their own interest, then in the interest of their immediate group, then -- lastly -- they will act for the benefit of the larger whole. This behavior pattern is documented and proven true over time and _no_ ideal abstraction will long get in the way.

If Mr. Lessig et al. are really interested in having functional government, then we need to discuss the dumping of representative democracy for something more "functional," such as direct democracy.

Comment You have to build it before they'll come. (Score 2) 654

When I lived in NYC, I would have paid 3x what I was paying to use public transit. I could get to anywhere in the city, or at least within ~2-3 blocks of anywhere, quickly, easily, and efficiently—a car would have slowed me down considerably.

On the other hand, living in southern California, I wouldn't have switched to public transit if I'd been offered a $300 a month bonus to do so. It would have meant hours every day walking around on foot in a very pedestrian-unfriendly city.

Two things are required:

1) Very good coverage of the geographical areas of interest, with frequent runs, to minimize time loss walking and waiting
2) Pedestrian-friendly environs when walking and waiting

If you need to spell times and connections out on your route map—i.e. if you need more than just a diagram of where the stations are—then you probably won't see public transit use increase no matter what you do, because your public transit system just won't get the job done.

A good, workable public transit system that doesn't negatively impact lifestyles and livelihood can tell its users everything that needs to be told with a simple, pocketable list of stations (visual or textual) and a poster on the wall of each station listing what's connected there, without any reference to time.

Comment Seriously, batch renaming using a GUI? (Score 1) 267

UGH.

Just invest a couple of hours learning about bash and do it on the command line. It will be time VERY well invested, as it will save you a LOT of trouble down the road, and you will be very surprised at the sheer number of things you can automate by typing commands and just how powerfully and flexibly you can automate them.
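To give a flavor of what a couple of hours with bash buys you, here's a minimal sketch of a safe batch rename (the filenames and prefix are made up for illustration): it prints the commands first, and only runs them once you drop the echo.

    # Preview renaming every .JPG in the current directory to a prefixed,
    # lowercase-extension name; remove "echo" to actually perform the moves.
    # (The "vacation-" prefix and .JPG files are hypothetical examples.)
    for f in *.JPG; do
        echo mv -- "$f" "vacation-${f%.JPG}.jpg"
    done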
