Comment I doubt it (Score 1) 347

Based on the wording, he's equating watching it on a given screen with watching it in a movie theater. That is, you don't get to keep it; you watch it once, that sort of thing. Maybe a Netflix model. At $4, ten years from now, for a large-screen TV, it sounds like some sort of rental - the holy grail of DRM that's been promised to the MPAA folks, where you can only watch it _x_ times, or only until date _y_.

Of course, like all models that revolve around these sorts of limitations, you need to implement increasingly restrictive DRM, enforced by both software and hardware, and it can't have any holes or alternative routes. We all know how well that works. We've seen exactly how well it works. People, by and large, aren't in a big rush to adopt hardware for features that only benefit copyright distributors at the consumer's expense.

My guess is that, in the best case, they'll end up partnering with cable companies and/or Netflix on some sort of à la carte channel model with a monthly subscription fee. Direct digital distribution is unlikely because they won't be able to set a price point that makes sense to the public - their price will be built around the idea of giving up control completely, because once one person 'hacks' it, it's free.

Comment Mavenized projects are doing this to me. (Score 1) 230

I won't run down the whole story, but my last company had a completely horrible, 100% custom, Python-based build system for a very large product that contained hundreds of subparts. Despite that - or perhaps because of it - I was able to easily get all the active code running from an Eclipse instance, meaning that testing a code change usually required no more than republishing to an Eclipse-managed Tomcat instance. You could pull the previous day's changes from all the other devs and, in about a minute or two, have some 200 components fully up to date and deployed locally. All very nice.

Then some folks came in and mavenized the whole thing, completely ignoring the idea of even using Eclipse. Now your only real choice for getting things to work is to make your changes to your one component, build the jar, war, or installer for it (depending on the complexity of your change), and then overwrite your installed instances. Then you can start everything up and attach a remote debugger, with all the limitations that entails.

Of course, that's only if you're touching a front-end component that doesn't get client-side customizations. If you made some changes in a shared library, you now have to recompile the whole project, and frankly, it's easier to run the installer than to try to make sure you get each dependent jar everywhere it's used in the system. The compile takes between 40 minutes and an hour with tests disabled; 3+ hours otherwise. You're still stuck with the remote debugger instead of a local Tomcat instance because there's no Maven plugin to produce an Eclipse web-faceted project with all the libraries and dependent projects properly cross-linked. It's painful enough to set up the 30-plus dependencies and smattering of configuration settings required for one component, but then the next change comes through and I have to do it all over again - and that's only if the change breaks a dependency; otherwise I won't even know to update things in the first place.

Comment Sounds like my old comp-sci professor. (Score 5, Insightful) 237

I remember he used to lament the fact that we had to use computers to run programs, because they were always so impure. Hacked up with model-breaking input and output considerations. He loved APL. Had us write our programs as math equations and prove that they had no side effects. On paper. Step by step, like how elementary teachers used to have you write out long division. He was a computer scientist before they HAD computers, he'd point out.

To be fair, APL was a wonderful language, and perfect so long as you didn't want to actually /do/ anything.

Well, that's unfair. As long as you mean to do a certain type of thing, these languages work out fairly well. The issue is the old percentage-split problem you normally see with frameworks and libraries: by making some percentage of tasks, X, easy, you create a high barrier to performing the remaining percentage, Y. The problem with adhering to pure functional languages is that Y is not only large, it often covers the most common tasks. Iteration, direct input and output, multi-variable-based behavior, a slew of what we'd call flow conditions - these are very hard to do in a pure functional language. The benefit you get is far outweighed by the fact that you could use C, or the non-functional aspects of OCaml, or some other so-called 'multi-paradigm' language to solve the problem in a fraction of the time, even with the side-effect management that entails.

Then, have you ever tried to maintain a complex functional program? There's no doubt you can implement those Y items above. The problem is that it makes your code very specific and interrelated, because you're forced to present a model that captures all the intended behaviors. It's a lot of work - work that will then need to be repeated each time you make additional changes. Adding a mechanism to, for example, play a sound at the end of a processing job based on its status: that's a line of code in most languages. Not so in a pure functional language (see the sketch below).
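
To make that concrete, here's a rough sketch - in Python rather than an actual pure functional language, and with hypothetical names (run_steps, play_sound, JobResult); the 'pure' version just mimics the effects-as-data style. It shows why the one-line change in an imperative codebase turns into signature churn in a pure one.

from dataclasses import dataclass, field

def run_steps(job):
    # Stand-in for the real processing work (hypothetical).
    return "failed" if job.get("broken") else "ok"

def play_sound(path):
    # Stand-in for an audio call; just prints here (hypothetical).
    print("*ding* (" + path + ")")

# Imperative style: the new behavior is one added line at the end.
def process_job_imperative(job):
    status = run_steps(job)
    if status == "failed":
        play_sound("alarm.wav")   # the one-line change
    return status

# "Pure" style: functions may not perform I/O, so the effect becomes data
# that every caller up the chain now has to accept and pass along.
@dataclass
class JobResult:
    status: str
    effects: list = field(default_factory=list)

def process_job_pure(job):
    status = run_steps(job)
    effects = [("play_sound", "alarm.wav")] if status == "failed" else []
    return JobResult(status, effects)

def run_effects(result):
    # A single impure layer at the program's edge performs the effects.
    for kind, arg in result.effects:
        if kind == "play_sound":
            play_sound(arg)

run_effects(process_job_pure({"broken": True}))

Every function between the job runner and the outer edge now has to change its return type to carry that effect description along, which is exactly the maintenance cost described above.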

The problem here isn't the oft-cited 'devs just have to think about things differently, and they'll see it's better.' It's more basic. It's simple throughput. Functional languages might be a theoretical improvement, but they're a practical hindrance. That, in a nutshell, is why they're not in common use in a corporate environment, where "value" loses its theoretical polish and is compared against hard metrics like time and cost for a given quality.

Comment We knowingly discriminate based on analytics (Score 1) 231

Another problem is that many of these analyses - let's assume they're generally accurate and not misleading - could result in numbers that are not politically correct. Like the sort of statistics that might drive law enforcement policies, or new laws targeting certain lifestyles or races. I don't know how we can differentiate between statistical analysis driving action and action born of discrimination, but simply ignoring the issues is not the correct decision.

There are whole sets of these sorts of problems with known causes and guaranteed solutions, but we're not even allowed to talk about them because they are almost wholly restricted to the behaviors of a specific minority. It's simply not politically safe to grapple with these issues, so we ignore them. Strike that - we actually go a step further and demonize the people who point them out. My personal experience is that if a white male says something like this:

"National crime rates show that low-income black males between the ages of 15 and 34 are responsible for over 50% of the violent crime in America, most in urban environments. If we want to lower the crime rate, we should start there." ... people will start mentally - if not verbally - tossing around terms like 'neo-nazi republican' and 'racist', and much much worse. It doesn't matter if that's what the statistics show. It's simply not politically correct to point out minority groups have negative traits, especially by white males - the majority of our politicians.

Then we have the deliberate ignorance of politicians, who focus on emotional appeals in popularity and re-election bids. For example, attacking scary-looking rifles and calling them 'assault weapons' rather than the weapon of choice in something like 98% of the crimes involving firearms in the US: sub-$400 handguns. It's just not as sexy, somehow. Not to mention that a fix like simply raising the price via a tax - as we've done with cigarettes, for example - would face a backlash claiming it targets the poor.

Yes, privacy is an issue, yes, poor input will result in inaccurate trend predictions or incorrect results, and yes, even something like the manner in which the data is collected can introduce a bias. However, I think these are minor issues compared to the glaring problem of not using the data once it's obtained. We have such a poor track record for making rational, data-driven decisions as a country that we don't need to speculate about misuse. The fact that we obtain it and then ignore it is misuse!

Comment Re:Correcting for aspirations (Score 1) 302

Yes. I've been beating this drum for the last 5+ years. There have been studies along these lines on motivation - at least one that I can remember. That study was on entrepreneurs - so no glass-ceiling issues - and the summary was this: women outperformed men in every case where they were compared on matching motivational goals.

So, among those interested in money - women made more money.
If family time was more important - women spent more time with family.
Where flexible working hours or vacation time mattered most ... you get the idea.

Of course, for the most part, women prioritized personal happiness, short commutes, family time - pretty much everything except money - whereas men prioritized money and professional recognition almost exclusively.

Unfortunately, the study is not online - at least, I haven't found it. I believe it was in a trade journal for industrial psychologists, or maybe just a business-management magazine.

There are more items to throw into the mix, too. As someone pointed out, women are not entering college programs whose degrees are associated with higher incomes. Yet studies show that when they do, they often advance faster than men in the same position[1]. This is referred to as career discrimination - it's self-selective, meaning the person affected is the person making the choice - and it's the largest source of wage disparity.

Career stability is another factor - someone else brought up that women are more likely than men to take time off for family, having and raising children, and so on - and thus have employment gaps that signal family is more important than career. Not that this is bad, just that it does affect hiring decisions. This is why top execs (not surprisingly, those who make the most $$$) are rarely female. Not every woman is Marissa Mayer (Yahoo's CEO, who went back to work with almost no leave after giving birth). When choices are made, you pick the one who places the business over their family, and 1-3 year gaps suggest that's not the case.

Then there's a whole other set of factors: men being more willing to negotiate salary, self-selective bias towards culturally expected roles and professions, and so on.

When all is said and done, there's still some wiggle room, but most of the 'discrimination' can be explained by non-discriminatory sources. There IS still discrimination, make no mistake, but in the US today it's easily offset by current affirmative-action-driven hiring standards and the apparent ability of women, in general, to outperform men - when they choose to.

If the goal is equality, I suggest we're already there. If the goal is to eliminate discrimination, that's not only more challenging, it will necessarily require eliminating gender- and race-based decisions from things like college acceptance and hiring. Obvious, you say? It would mean no more affirmative action, since that provides advantages based on gender, race, sexuality, or other minority status - which is literally discrimination along those lines.

[1] - Yes, this could be due to affirmative action more than ability; the studies tend to focus on large companies with lots of data points, and those companies tend to want an appearance of gender (and other forms of) equality, as well they should. However, the first study I mentioned indicates that women outperform men, so my theory is that these are promotions earned on ability.

Comment We've already been through this (Score 1) 189

A pretty comprehensive study showed that, regardless of the language selected, the programs written by developers with the most experience in that language were the ones with the fewest security bugs, regardless of traits of the language like attack surface or complexity.

It's short and sweet: a developer's experience with the specific language or framework an application is written in is a better indicator of application security than the language or framework itself.

Comment Isn't there a better way to do this? (Score 1) 65

I've seen the various design concepts before, and they're all variations on the same intrinsically flawed theme: a display projected onto a liquid or a gas, which requires very still air and a very irritating environmental system to manage - not to mention an image that is disrupted whenever a user 'interacts' with it, because doing so interrupts the 'canvas'.

I don't know of any scheme that avoids these fundamental problems, and they will stop this from ever being widely useful, much less a consumer-level technology.

I think we're just going to have to stick to visual overlays on 3D space, augmented reality style, at least until we can actually produce the sci-fi concept of projected holograms. Anything less is simply not useful enough.

Comment The theoretical & practical hurdles of 3D printing (Score 1) 143

Theoretically, you need to:
    - Level the build plate and calibrate it
    - Learn any 3D modeling software to create or modify objects, often at millimeter level precision
    - Learn the slicing software which converts your 3D object file into a file your printer understands as instructions

That's it. Frankly, the second one is a huge investment of time and energy, and while some simple 3D design is possible in very stripped-down programs, nothing BUT simple design is possible in them. Autodesk Inventor and the like may be more complex than they need to be, but only for a fairly basic definition of 'need'. Many folks just rely on other people's models and skip step two entirely, and you can get by that way ... for a while.

A bigger problem for the consumer market is the practical issues. What a consumer needs is reliability and a by-the-numbers process. Like an inkjet printer: when I send a document to it, I hit 'print' and I expect it to work.

It took a long time for printers and copiers to get to that point. Even now we have issues where printers need different settings for different paper, and we still have paper jams and ink smears - and the basic functionality of a paper printer is significantly less complex than a 3D printer's.

We're not there yet with 3D printing. As a layer-by-layer process, any minor error compounds as the build progresses, and we don't have consumer-level devices that match the precision of the expensive commercial-grade printers. The following items all have a large impact on the success of your build, and all of them are inextricably linked: print speed affects optimal rafting, which is affected by humidity, and so on and so on.

    - Managing airflow, humidity, temp, and particulate matter (dust) around the device
    - Rafting and supports to actually allow printing various shapes with undercuts and voids (which vary based on a number of things, not least of which is the actual model)
    - Balancing heating and cooling; cooling causes contraction, which results in curling, especially when different parts of the build are at different temps at the same time.
    - Print speed
    - Print quality
    - Printer head wear and tear

One of the tests of these "pro-sumer" 3D printers is to print the same object 5 or 10 times and count how many attempts succeed with the same build instructions. 8/10 is really good. Usually, of course, you'll have to try 2 or 3 times just to get your first 'successful enough' print - those don't count; you're just dialing in the numbers for that model, based on experience and guesswork, for your specific printer.

What we're left with is this: all the made-for-your-mother, 'basic consumer' 3D printers are, and will be for the foreseeable future, akin to the Easy-Bake Oven. They sort of look like a real oven, and they can sort of cook food like a real oven, but you're not meant to use one as a real oven. Stick to the company-approved recipes only, and even then, the quality will be low.

So, no, I don't think they're going anywhere with a consumer device at this point in time. Maybe in another 5-8 years we'll be ready for the first widely usable one, but it's a bit too early to crow about it just yet.

Comment I wrote anti-terrorist software for banks. (Score 5, Informative) 275

I've written about this before; I used to write financial software for a living, and one of the requirements for a US bank was to provide a mechanism to detect transactions by an unauthorized person.

In short, the govt. provides a list of bad people in a text file. One name per line, all upper case, like it came out of an old batch system. We then check to see if the sender or receiver of any transaction /EXACTLY/ matches that string, case insensitive. If it's an exact letter-for-letter match, there's a flag that's set and the transaction is delayed, but it appears to go through as normal(*). What happens after that is the bank's responsibility, but that's the whole of the complexity.

Whoever maintains the list usually includes a few spelling variants: OSAMA BIN LADEN or OMASA BIN LADEN or OSMA BIN LADEN, for example. But that's it. Just spelling your name slightly differently is enough to avoid the flag. We're literally not allowed to add anything else, like Soundex matching or handling foreign characters.
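
For the curious, the entire check boils down to something like the sketch below. This is Python with hypothetical file and field names, not the code I actually shipped, but the logic - exact, case-insensitive, letter-for-letter matching and nothing smarter - is the same as described above.

def load_watchlist(path="watchlist.txt"):
    # One name per line, all upper case, like an old batch system dump.
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

def screen_transaction(txn, watchlist):
    # Flag only on an exact, letter-for-letter match (ignoring case).
    # A slightly different spelling sails right through, unless that
    # variant also happens to be on the list.
    return (txn["sender"].strip().upper() in watchlist
            or txn["receiver"].strip().upper() in watchlist)

# Usage (hypothetical): hold the transaction for review if it's flagged,
# otherwise let it appear to go through as normal.
# watchlist = load_watchlist()
# if screen_transaction({"sender": "ACME CORP", "receiver": "OSAMA BIN LADEN"}, watchlist):
#     delay_for_review(txn)   # bank-specific handling from here on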

This is ~probably~ also how the TSA no-fly list works, and why you still hear about false positives from time to time. It's also probably how any such security system works until it's been around for 20 years and they hire a contracting company to build really good software that does what they want, instead of what they think they want it to do.

It just takes a very long time for software designed by a legislative committee with no technical awareness to morph into something usable, but that's government for you.

* - most transactions are not sent out until the end-of-day reconciliation anyway, so it looks like it's accepted like most other transactions, probably in a 'pending' state in your online balance - unless you're paying for a wire transfer or something.

Comment The job equivalent of a college CS education (Score 4, Insightful) 197

The simple fact of the matter is that a four-year university's computer science program is not meant to provide job training, and as far as career skills go, you could pick up the job-skills equivalent of a CS degree in under a year.

I wrote about this the other day, on the Ask Slashdot: Modern Web Development Applied Associates Degree topic, and I'm sticking to my guns on it. You don't need any math more complex than simple algebra. You don't need any theory classes.

Some of these theory classes may provide better insight, and lacking them may limit you if you're attempting to enter a highly specialized, complex field with no demonstrable experience in it (which, by the by, doesn't really happen). But for 98% of your day job, it's going to be more important to know how to parse and sanitize input than to know how to write a compiler or a raytracer, decompose a function into mathematical terms, perform a Big-O analysis, or design a memory manager for an OS - and you'll probably never use matrices or differential equations.

Hell, the grads I see nowadays have no concept of efficient design; most lack basic database skills, awareness of common libraries and development tools, and have never used a team-based issue tracker or source control. Unless they've struck out on their own, they're almost completely unsuitable as candidates. Many of the self-taught devs seem to have a better grasp of things, if only because they've attempted to take usable software from design to implementation, instead of homework assignments demonstrating polymorphism and recursion.

On the other hand, for many HR departments, a degree is go/no-go. You'll never get to an interview without one, and there's no free, online equivalent for that. You'll just have to make do with having superior technical skills, and try to apply at a company that values that more than a sheet of paper.

Comment Sure (Score 4, Insightful) 246

This assumes 'web development' refers to web-based applications, not just informational webpages.

This is likely to be an unpopular opinion to many, but I don't see the huge barrier here.

I've been working as a software developer for nearly 20 years now, going from games programming to business apps to web development and machine learning. In that whole time, I can count only a small handful of times when I've ever had to exhibit mathematical skills more complex than trivial algebra. Oh sure, in college, they made me write my own compilers, I had to write my own vector math routines for my ray tracer, and so on, and I consider these valuable learning experiences. However, in the real world, where I'm employed and make money, I use software libraries for those sorts of things.

When it comes to data structures, the languages employers want today - Java and C# - provide the majority of the structures I need, plus optimized-enough algorithms to manipulate them. I don't have to do a Big-O analysis and determine whether my data patterns would be better served by a skip list than by a quicksort, because we just throw memory and CPU at it anyway!
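
As an illustration - a throwaway sketch in Python rather than Java or C#, but the standard collections in those languages cover the same ground - the built-in dictionary and sort handle typical day-to-day grouping and ordering work without any hand-rolled structures or Big-O analysis.

from collections import defaultdict

orders = [
    {"customer": "alice", "total": 42.50},
    {"customer": "bob",   "total": 19.99},
    {"customer": "alice", "total": 7.25},
]

# Grouping: a plain dict keyed by customer; no hand-written hash table.
totals = defaultdict(float)
for order in orders:
    totals[order["customer"]] += order["total"]

# Ordering: the built-in sort is "optimized enough" for anything short
# of genuinely huge data sets.
top_spenders = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(top_spenders)   # [('alice', 49.75), ('bob', 19.99)]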

The point is, if you spend 1-2 years learning to write software - not computer science theory - you'll be ready to enter the workforce. Sure, you're not going to be the one creating those frameworks, and you're not going to be an architect, but you'll be able to use them. A few years of real-world problems with Google at your fingertips, and it's likely you'll have learned enough to start tackling those harder problems.

Here's a list of what I'd prioritize before computer science theory, in regards to employment:
      - Proficient in SQL and at least one database type
      - Familiar with IDEs, source control, bug/task trackers, automated builds and testing, debugging tools and techniques.
      - Ability to work in a group software project.
      - Exposure to and participation in a full-blown software development life cycle (SDLC): reading, writing, and evaluating requirements, coding, debugging, QA, unit testing, the oft-overlooked documentation, etc. Include at least something waterfall and something agile-ish.
      - Expert with HTML & CSS and JavaScript, plus awareness of JavaScript libraries and frameworks.

I don't think I need to explain the value of any of these, and for the entry-level developer these practical concerns trump high-level concepts like discrete mathematics or heuristic design.

Comment Socially accepted uses of a prison: (Score 3, Interesting) 326

1. Removing a danger from society
2. Acting as a deterrent
3. Serving as a punitive measure (strongly related to item #2)
4. Providing rehabilitation

To date, analysis[1] has shown that never in the verifiable recorded history of crime and punishment has any prison, anywhere, ever had a non-negligible impact on recidivism rates. Some pre-established percentage of people continue to commit crimes after a jail sentence, regardless of changes meant to enable rehabilitation. Education, trade skills, access to medicine and counselors, 'nice' quarters, access to games and exercise, work-release programs, etc. - no appreciable impact.

Even punishments like public shaming (very big in medieval times) have no impact on the average number of individuals willing to commit the crime again. Even torture (short of permanent harm) has no real lasting impact, though it does often result in individuals putting more effort into reducing the risk of getting caught.

In short, prisons do not rehabilitate prisoners, and they never have.[2] [3]

Pretending that they do, or can - and then making screeching noises when they fail, or worse, throwing money at them so they can try yet another fad get-lawful-quick program - is just irrational. Blaming the system for not working as one expects only shows the value of those expectations.

Here's the takeaway: the only things prisons are good for are removing a danger from society and providing a punitive threat as a deterrent - and even that last one has only limited impact.

For those interested in constructive comments, the fix is obvious and simple: spend that money on fixing the parts of society that give rise to crime. Focus on education, focus on two-parent households, focus on employable skills, and so on.

[1] - oy. Google it, read some books, and take a few criminal justice classes. Personally, I'd start with this book, http://www.amazon.com/CRIMINAL... because it's a fascinating read, but your mileage may vary.
[2] - though there's nothing to say they couldn't eventually. Maybe cryogenically freeze them and subliminally imprint upon them the desire to knit when they're stressed? Could work.
[3] - Technically, life in prison works, in that they don't commit any more crimes, but the important point to note is that rehabilitation programs STILL have no impact on this rate. So it doesn't count either.

Comment Re:My thoughts as a die-hard Thief fan. (Score 1) 110

Were you talking about the daily news and world events? The political history of most existing countries, and almost assuredly the history of those that no longer exist?

Sure, there are fewer metal golems and tricksie lords, but what you're describing is how the world actually seems to work. You can't shelter kids from that, and if you do, the result will be an individual incapable of dealing with reality. It'd be like living on the "Small World" ride until the age of 12 and only then being released into the world. That's quite a hit to the psyche.

Comment Pre-scarcity = revolution (Score 2) 888

In the timeline of a pre-post-scarcity world, we have a population of unemployed individuals that will grow as job growth - especially in unskilled blue-collar labor - flattens or reverses. Until we're in a post-scarcity world, however, these individuals will live in a society that requires money for things like housing, food, and clothing - whether that money comes from the government or not.

At some point, the government simply won't be able to provide; its budget will be spread too thinly across the nation. This is one of those situations where we'd be hard-pressed to progress iteratively - it's a "flip the switch" sort of thing. Doing otherwise will create a massive underprivileged underclass, likely to be quite frustrated with their lives: no jobs or job prospects, subsistence-level living, no ability to pursue personal goals or desires...

Two things can happen at this point:
      Those who have focused their lives on acquiring wealth - the super rich, the 'haves', the ones most defined by the benefits wealth has brought them - can all become completely selfless altruists and, together, agree to reduce their primary source of value to near zero by, effectively, eliminating money in the spirit of pure socialism. Thus, utopia is achieved.
      Alternatively, they will not do that, and at some tipping point - say, 60% unemployment - there will be a revolt that destroys the current economy, form of government, and so on, setting us back to zero on the cultural-progress scale (and likely the technological/engineering one) but removing the then-existing artificial constraint that says work = money.

I really don't see the first happening. Do you? Am I overlooking some important alternative choice?

In actuality, I think we're headed towards a more corporation-centric outcome, as predicted by many of the darker sci-fi novels out there, rather than a post-scarcity world, but hey, that's just my opinion.
