Comment Re:TAG: NOTNEWS (Score 1) 134

It also boasts worst-in-class standards support. When building advanced web services, Chrome's lack of support is a big enough pain; IE 11 is still about 3x as bad, though it is getting better. IE 10, in particular, was a huge improvement, but I often wonder why they still bother trying to build a browser from scratch.

Comment Re:Really? (Score 1) 310

I wrote a more extensive post on this earlier. While some teachers clearly go above and beyond what their peers do, teachers average 3 fewer hours per work week than other professionals (without counting a teacher's extra vacation), and if they use all of their leave, will work only 171.5 days/yr on average instead of the 220 typical for a professional with 10 years of experience. Incidentally, the median experience for teachers is about 3 years, so a comparable professional would be working even more days per year. To cap it off, teachers are paid 11% more than other professionals. What this means is that the average professional is working >1/3 more hours for 10% less pay compared to a teacher, and that the teacher is earning 50% more per hour. If you want more details or sources, see my previous post on the issue.

I'm not sure I understand your point in your analogy to programming. I usually do just "type it once and run the program." Sometimes I'll get fancy and type it 3 times in 3 different ways to make them race against each other. (It's kind of like battle bots, but nerdier.) Then again, I've known brilliant, practiced teachers who could pick up chalk and go.

Comment Re:Really? -- Lets look at actual numbers (Score 1) 310

The local high school here only has 4 classes during a day, and I'm pretty sure the teachers get a free class period; they at least got one when I attended. The teachers I know there are among the few great teachers you don't want to miss out on having. Such employees are almost always underpaid.

Let's stop talking about anecdotes and look at some hard facts. The median compensation package for public teachers is $75k/yr (source), and they have a median of around 3 (maybe 4) years of experience (source). The BLS states that teachers are paid 11% more than other professionals (source). At 53 hours/wk (source), it sounds like a lot of work, but it is 3 hours fewer than most professionals, even without considering vacation time (source). Considering vacation time, teachers who use all of their days of leave work an average of 171.5 days/yr vs 220 days/yr for private sector professionals with 10 years of experience (source), which isn't quite a fair comparison because professionals with 10 years of experience get more vacation than people with 3-4 years of experience.

If you multiply this out, most professionals are working over 1/3 more hours than teachers for 10% less pay. Teachers generally get off work early enough to make a dentist appointment, avoiding the need to shift hours around like other professionals, and their extra hours outside of the school day are free for scheduling as they see fit. Really good teachers might be working long hours for their money. However, when they're getting paid 50% more per hour, it's clear that most are not.

If you want to argue the difficulty and stress of the job, then that would be a different matter than what I've discussed. It won't be fixed by reducing hours or increasing pay, but by fixing policy.

Comment Re:That's some crazy shenanigians right there. (Score 1) 303

So if Google wants their fork to interface just like Oracle's, they should take a clean-room approach and have all the APIs function exactly the same way, but without being derived from Oracle's source.

And they have to function EXACTLY like the one they're copying. That's the entire point of using a standard API. If that aspect is copyrightable, the shape of how things interface with other things, then this is sheer madness.

Google did use a clean-room implementation. The judge did rule that Oracle now owns exclusive rights to how some 6,000 function calls are defined and organized for the next 80-100+ years (depending on how much Congress extends copyright in the next century), supposedly because such rights promote progress (the only grounds for granting copyrights or patents).

This means that any implementation of C or C++ probably violates the copyright on the respective language's standard library. Google can be sued again for offering a copy of gcc as part of the Android toolchain, along with any Linux distribution that makes a compiler available. Furthermore, the File and Edit menus could be considered as much of an API as any programming language's, since you're invoking an interface to tell another piece of code to open a file or whatever. It actually was substantial work to first write something so elegant, so does the original inventor get to sue everyone who writes a program with Edit->Copy in its menu in the year 2040?

Where can the madness end? What makes an API fall under copyright, and not a protocol? The HTTP specification is copyrighted. Does Tim Berners-Lee get to sue every web server, every web browser, and every use of a REST API?

Comment Re:Oh PJ, where art thou? (Score 1) 303

Congress is given the power "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;" (US Constitution Article I Section 8) The idea that locking an API into 100+ years of exclusive use promotes progress is asinine, and any judge ruling to do that is ignoring our highest laws.

Comment Re:It's the right tool for the job (Score 1) 634

That was with numpy in Python. In C++ I was using the Armadillo library, but I'm not entirely sure what is going on underneath it. Both BLAS and LAPACK are dependencies, and it uses different ones for different operations. However, I'm not sure whether it was using blas3 or OpenBLAS, because they're both installed.

I should point out that this was for a machine learning class, so the matrices were fairly small. At a larger scale, the difference may have mattered less due to other overhead, but I do know that in C++, >98% of the CPU time was typically spent on matrix operations. However, Armadillo supposedly provides some nice runtime optimizations, and I seem to remember sizable compile times, given the small programs I was writing.

I was using C++, but if Fortran was responsible for a good compiled library, then I'm glad they used it. It's a language I may have to look into when I get a chance. I hate to see good hardware go to waste simply because of shoddy software.

Comment Re:Why is anyone still using C++ in 2014? (Score 1) 634

I use it because I can get it to run significantly faster with 10% of the memory footprint of Java. Java, in addition to going through a slow, buggy, insecure VM, doesn't let me do the same optimizations, and consequently, the resulting Java code would take at least 3x as long to run (Python is probably worse).

My C++ source code generally ends up smaller too, especially when I have to interface with a database; JDBC is an awful mess compared to OTL. My code is bug-free enough that my problems are almost entirely limited to C++ tweaks (in my personal libraries) that I can't do in safer languages, or to bad algorithms, which tend to come from business requirements (or misunderstandings thereof).

I'm sorry you consider lambda (anonymous) functions and expressions, range-based loops, standardized multi-threading, regular expressions, smart pointers, and type inference to be obtuse. Your programming world must be small, or you just really hate C++. If you know of something better, where I won't take a meaningful speed hit, then let me know.

Comment Re:Netflix is a terrible test case (Score 1) 227

In-state long distance can get really bad. My call gets routed across the country to California or the east coast; placed as a new, cheaper, inter-state long distance call from a VoIP provider; and maybe re-routed back to the number I dialed after 5-20+ seconds. I did some data crunching on the issue for a customer. I also tried to get one of the states to commission an investigation, but they just didn't care that much.

Comment Re:Competition (Score 1) 258

I estimate they'll be serving about 300,000 people in the Kansas City metro area by the end of the year, with more locations already planned into 2015. Market penetration is about 50%, and the population of Kansas City, MO itself is around 450,000. First, (IIRC) TWC predicted single-digit penetration rates or maybe 10%; then they predicted only a risk of losing 1% of their national customer base of 14 million subscribers to Google Fiber. Now, they're facing a possible 1% national loss of subscribers from just KC, with 10 more metro areas lined up.

TWC is so terrible at upgrading their services in KC that they would lose everyone they possibly could to Google even if they tried to upgrade to compete. For months, I was losing packets over TCP tunneled inside TCP, and bandwidth ping tests all came back "fail".

Google won't offer up the designs of their self-built networking hardware the way Facebook does with theirs. It's not Google's style, because their hardware is a big part of their advantage, but they have made some comments about the general design, like customers being wired directly to hut-sized switches that sit on the backbone. They have bought a minor ISP (or maybe two). I don't expect them to buy anyone major, mainly because few companies are doing fiber to the home.

Comment Re:Competition (Score 1) 258

They haven't done anything for me in Kansas City, except for a minor bump in the standard-tier speed of Time Warner Broadband (which caused major network disruptions, severe packet loss, ping times of "fail", and still leaves Netflix unstreamable on Sunday afternoons) and several door-to-door AT&T U-verse people trying to tell me their fiber-to-the-node service was the same thing as Google Fiber. I have an unfortunate location 1/4 mile outside of a service area, which manages to get me painful reminders to sign up even though I'm not eligible. Currently, people are reporting that Google has a 50% penetration rate, and in places it's as high as 75%, which may have something to do with the competition behaving differently in Austin.

Comment Re:Competition (Score 1) 258

AT&T, Verizon, and Comcast do have billions in profits each year, which they could spend on advertising. Time Warner Cable only has about $500 million per year in profit, but their cable is already saturated with their advertising. Google has done relatively little advertising, mostly using resources like their Fiberspace, Mobile Fiberspace, and retail stores. (Source: I live 1/4 mile outside Google's service area.) They have enough people talking about the service without buying advertising. It seems like I can't see anything online about Google Fiber without reading a pile of comments begging Google "Please come to ...".

I don't know about Google's other deals, but in Kansas City, they pay the same price for utility pole use as the competition (sadly, not all of the fiber will be buried). I suspect it's the same for all of their deals, since all the competition got their pole usage expenses lowered retroactively after Google negotiated prices down. Google also maintains a larger network than Level 3 (Level 3 Communications...), and builds their own network equipment (including what they sell/give Fiber customers). One of Google's strengths has been their continued ability to find ways to operate for less money than the competition. What costs are you educating them on again?

As for cherry-picking customers: they just install in the most eager places first, which makes sense when you keep a 2-3 year backlog of installs just in the KC metro area, and you have plenty of suburbs you haven't even talked to yet, with penetration rates typically around 50%. Time Warner Cable also picks what areas they will and won't service. I have a neighbor who can't get their cable in their area (which somehow consists of just their house), despite being completely surrounded by Time Warner customers.

Comment Re:cry of a dying business (Score 1) 210

ISPs have been hounding Google to pay for bandwidth at least as far back as when they bought YouTube, and even some countries (e.g. France) are trying to put pressure on them. Their policy has always been to fight difficult and expensive battles to set a favorable precedent. So far, only one company (Orange) has been successful at making Google pay for peering. In 2010 there was a study showing Google's network was the 2nd largest, outsized only by Level 3, and I think I read in a financial report that Google's network is now larger than Level 3's (article here). Google's private network connecting all their data centers operates at near 100% capacity all of the time (via OpenFlow), at a time when wasting only 60-70% of your bandwidth was considered really good. Google even builds their own network equipment, because no one makes anything that meets their needs. People were acting like Google was new to this networking game when they started to roll out Google Fiber. In reality, Google has a huge network that they manage, and on the order of $10 billion in profit every year that will just rot (thanks to the Federal Reserve's inflation policies) unless Google finds ways to reinvest it. It's 3x the profit (and growing) of AT&T and Time Warner Cable combined, which are the two ISPs in my part of Kansas City.

I would guess it costs Google $300 per customer to run fiber, since that's what they charge for installation (waived with 1 yr of Gigabit service), but that may not include everything Google pays for. I know they're paying $5 each for utility pole access, and they charge customers $100 for a replacement fiber jack and $200 for a replacement network box, which I assume includes some overhead in the price. They also do bulk installs by neighborhood instead of going all over to individual houses.

I expect they will roll out service to all 34 of the new cities in 9 metro areas ASAP, as long as the cities are cooperative enough. Overland Park, a suburb of KC, had their offer revoked for asking questions: their council was supposed to be approving the offer at a meeting, but instead decided to wait for clarification of one of the terms of the contract. I guess Google is too busy with people who are begging for service and offering their first-born child to deal with questions at this point. They have a backlog through 2015 in the Kansas City area (the city limits should be complete this year), and they still haven't announced anything for where I live, which is a painful 1/4 mile outside of a service area and less than 4 miles in any direction from Google Fiber. I'm stuck working from home with Time Warner, who also runs the connection at my data center 3 miles away, and who requires me to tunnel my ssh session inside another TCP stream to avoid massive packet loss thanks to "upgrades". I never expected to lament losing the speed and reliability of dial-up.

Comment Re:What Level 3 can do (Score 1) 210

I think the problem people are really complaining about is ISPs not having capacity for recurring daily/weekly peaks. For example, my parents cannot stream Netflix on Sunday afternoons. For a while, I could barely run ssh to my data center 3 miles away, with multi-second delays and dropped packets. This was definitely a different problem than L3 described, as both computers used Time Warner as their ISP.
