
Comment: Re:Sorry most Americans... (Score 1) 119

Martin reps are claiming the parachute system will start functioning in just a few meters. As Ellis mentioned below, this obviously must mean they're ejecting and inflating the parachute via some sort of mortar, and as such, it probably starts working almost instantly.

As I wrote in my initial post (surprised it was first post, weird), those are still my biggest concerns. The comments in the articles and the videos they've made do not calm those fears.

Have you ever fallen from "just a few meters?" I have.

As a child I fell from the monkey bars at school, also "just a few meters". I landed badly on the schoolyard gravel designed to reduce injury and still broke my leg.

My daughter also once slipped while running on wet ground, fell, and twisted her leg badly. (Genetics?) After surgery, two screws to hold the bone together, and six weeks of recovery, she was relearning to walk without a limp. Or consider Kevin Ware, who jumped wrong in an Elite Eight basketball game and broke his leg so badly his tibia protruded through the skin. People break their bones in ground-level falls all the time.

I'm fine with being in the air. I'm sure that when everything is going perfectly and it is zooming through the air, the experience is probably more awesome than anything. And if the motor gives out at 800m, the parachute will be fine.

No, I'm worried that I'll be launching or landing, have a critical failure near the ground, and end up hospitalized, seriously injured, or perhaps killed. Either that, or have a critical failure that drags me across fields or pavement, leaving a trail of skin and blood on the ground.

Comment: Sorry most Americans... (Score 4, Funny) 119

lift humans weighing up to 120kg (~265 lbs)

So basically half of Americans are excluded. Got it.

On a more serious note, there is NO WAY I'd do it. Not because it wouldn't be cool to fly through the air, I'd love that part...

...It is the landing on the rock hard ground I'm concerned about.

Comment: Re:The question many want to ask, but don't dare t (Score 1) 382

What do you really think about systemd?

He has answered that many times. I want a slight variation.

Last year he commented on it several times to several key groups, saying: "I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane, but those are details, not big issues".

He's mentioned in several interviews that he has had to deal with fallout from the system and with major bugs in it. He's also had some very public, verbally brutal interactions with key members of its team. But those are less relevant on the technical side. The systemd developers are attempting to correct what they believe are defects or missing functionality.

My variant would be: How has systemd's expansion affected your work on Linux? More specifically, over time the needs of systems change and drift, and core features need to adapt. Which systemd features have you considered to be functionality missing from the kernel that should be incorporated there, and which are missing features that belong instead in system libraries?

Windows has had similar infighting over the years, where the Shell folks implemented all kinds of useful and interesting functionality that really had little to do with the shell: path functions that should be in the storage libraries, notifications that should go through the kernel, numeric validations that belonged in the core, and so on. It is always a balance deciding what belongs as a core feature versus what belongs in side libraries. Systems evolve over time: How much driver support should be in the kernel? (Different OSes have different theories.) How much networking support should be in the kernel? (Decades ago the answer was usually "none"; now it is heavily supported.) What security aspects belong in the kernel? (This used to be largely ignored; today it is an ever-growing concern.) Over time the balance changes.

I think part of the systemd concern is that it implements many features which -- over Linux's two decades -- have transitioned from minor external tasks into universal system requirements. The boundaries change. I'd like to know how Linus is working with (or against) the inevitable winds of change.

Comment: Re: I was wondering if/when this would be on /. (Score 1) 86

That's pretty broad and vague. Does the website which is registered in my name but which is actually my girlfriend's for the local comedy scene count? It has no ads but helps comedians get stage time, some of it paid. It seems to me that any site with an ad portal or an affiliate program link would count. Not just people selling widgets.

That's the issue.

Ads alone are enough to qualify a site as having a commercial purpose.

Run a blog with ads on it? That's commercial; your real name and real address need to be on it.

Consider the many blogs revealing TSA problems. The popular "Taking Sense Away" was ad supported. The TSA employee running the site would need to reveal his real name and address.

Consider Groklaw, a site that many /. users referenced for the decade it was in operation. The founder (for good reasons) wanted to mask her identity and personal information. The site was already suffering from privacy issues and PJ was in the process of winding it down, but after learning that her encrypted email could no longer be considered secure, she ended the site. Since she accepted donations on the site, that would qualify it as a commercial site subject to mandatory release of identity.

The recommendation of "commercial purpose" is overbroad, especially with the current definitions of commercial.

Comment: Re:If you cannot answer your own question.. (Score 1) 296

I'm someone who gets contracted to optimize this kind of code. Unfortunately, like most good technical problems, the answer is "it depends."

Pulling in some quotes from your various replies and comments scattered across the discussion:

My perfect solution would be developing it in C# while having complete control over memory allocation and release ... Linux+Windows ... I need to be able to allocate and release memory manually. I have done some prototyping in Java and C# hoping I could control garbage collection enough for my needs, but it isn't possible

Since your replies are talking about cross-platform C#, that almost certainly means Mono instead of the MS implementation. That's a good thing for you.

For your garbage collection concerns, Mono ships with two garbage collectors: SGen and Boehm. Both can be modified for your needs if necessary. Java's collectors vary by VM, and that leads to problems for many developers.

On the nature of GC, and Java in particular: it is difficult to write memory- and cache-friendly code using existing libraries, as many libraries (especially in Java) perform many small allocations and duplicate data without thought. In one recent project I was brought in to help fix, the Java developers were, as a general practice, unexpectedly allocating and using large amounts of memory that destroyed performance. For example, the libraries and code would allocate and manipulate over 4MB of memory to work on a buffer that was guaranteed to always be under 80KB. They took warnings from static analysis tools seriously even when it was unnecessary; Sonar warnings that shared data is a security concern meant they would copy huge amounts of data, even when the only consumer of the data was their own program on their own servers. It took some work, but reducing the server's data processing until it fit in the L2 cache dropped latency from milliseconds to nanoseconds. It is certainly possible in Java and C# to tune your program to minimize cache misses and more carefully control object lifetimes, but it requires a serious effort. In my experience this is mostly about the developers; C# developers tend to make unbounded copies of objects less frequently. :)
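The 4MB-for-an-80KB-buffer anecdote has a simple fix pattern: preallocate one right-sized buffer and reuse it, instead of letting every call allocate fresh copies that churn the GC and thrash the cache. The thread is about Java and C#, but the idea is language-independent; here is a minimal Python sketch (the class, the bound, and the stand-in "processing" are invented for illustration, not from the project described):

```python
# Buffer-reuse sketch: one long-lived buffer sized to the known upper
# bound, written into in place, instead of a fresh allocation per message.
MAX_MESSAGE = 80 * 1024  # the "guaranteed under 80KB" bound from the anecdote

class MessageProcessor:
    def __init__(self):
        # Allocated once; reused for every message.
        self.buf = bytearray(MAX_MESSAGE)

    def process(self, payload: bytes) -> int:
        n = len(payload)
        if n > MAX_MESSAGE:
            raise ValueError("payload exceeds guaranteed bound")
        # Copy into the reused buffer; no new buffer object is created.
        self.buf[:n] = payload
        # Stand-in for real in-place processing: a trivial checksum.
        return sum(self.buf[:n]) % 256

p = MessageProcessor()
checksum = p.process(b"hello")  # -> 20
```

The same shape works in Java (a pooled `byte[]` or reused `ByteBuffer`) or C# (`ArrayPool<byte>`); the point is that the allocation count per message drops to zero.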

There is no UI required for the project, although I realize you can use modules like QtNetwork without the UI libraries. ... It is similar to writing an database management system ... There is no UI component

In this case, writing your UI should be a separate endeavor from writing your main project. Use whatever tools and frameworks you are familiar with to build the UI, quick and dirty. Don't worry about the choice of Qt or other UI tools; focus instead on your communications protocol. Keep the data minimized and tight. If you decide to throw in an intermediate web server like Tomcat to give an HTTP interface, make that a third, separate application: one application is your real server, one is your Tomcat server that talks to your server and to your clients, and one is your UI application. If scaling to large numbers is important, the extra layer introduces some latency, but in exchange it lets you use all of Tomcat's nice web features without bloating your real server, and it can be leveraged for scaling up to multiple load-balanced web server boxes that communicate with load-balanced core servers. (All depending on the project details you haven't shared.)

In your project, remember to keep IO asynchronous from processing. Any network IO should run in its own small set of threads, and NOT one thread per connection (a strangely common mistake). Boost's Asio library handles this well cross-platform and is implemented with platform-specific facilities that are quite speedy.
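Boost.Asio is the C++ route; as a hedged illustration of the same pattern, Python's asyncio multiplexes many connections on one event loop instead of spawning a thread per connection. The echo handler and demo client below are invented for the example, not part of the poster's project:

```python
# One event loop serves every connection; each connection is a cheap
# coroutine rather than an OS thread (the thread-per-connection mistake).
import asyncio

async def handle(reader, writer):
    data = await reader.read(4096)   # read one chunk from the client
    writer.write(data)               # echo it back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def demo():
    # Port 0 lets the OS pick a free ephemeral port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    reply = await reader.read(4096)  # echoed bytes come back
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(demo())
```

Thousands of such coroutines can share one loop; the equivalent in C++ would be a small pool of threads running `io_context::run()`.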

You didn't mention what else your program is doing, but in several of your replies you mention "like a database management system". Just like network IO, any disk IO should generally be asynchronous and use OS-specific functionality that lets the system make the best use of its resources. Depending on the details: if you touch any kind of spinning disk you want to take advantage of head-placement optimizations the OS can make, or of SSD optimizations, low-latency data streaming commands, bus commands and direct-to-memory transfers, direct disk-to-disk operations that never cross the IO card, and so on.

As for general advice....

Keep your data in the cache, accessed sequentially; keep it small and properly sized. The fastest processing is the processing you never do: don't load if you can help it, don't encode/decode unless you must, and if you can preprocess data so it maps directly to its final memory layout, you can reduce load times to the disk transfer speed. Remember that traveling through the hardware tends to cost an order of magnitude or more for every system you pass through, so keep the work and memory on the card, chip, or disk array where the data already lives. Wait for nothing; asynchronous is your friend. Finally, learn to love profiling and performance tools, no matter what language you use.
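One piece of that advice -- don't copy or re-decode data you can use in place -- can be sketched in Python, where a memoryview over a packed array gives a zero-copy window onto sequential data (the array contents are arbitrary demo values, and the higher-level languages in the thread have analogues: `ByteBuffer` slices in Java, `Span<T>` in C#):

```python
# A memoryview slice shares the underlying buffer; a bytes/list slice
# would allocate and copy. Packed arrays also keep the data small and
# sequential, which is what keeps it cache-friendly.
import array

data = array.array("d", range(1_000_000))  # packed, contiguous doubles
view = memoryview(data)

window = view[1000:2000]   # zero-copy window, no duplicate allocation
total = sum(window)        # process in place: 1000.0 + ... + 1999.0
```

The processing you skipped (the copy) is processing you never pay for, which is exactly the point above.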

Comment: Re:Salaries should be limited (Score 4, Insightful) 381

Professionals manage their own time, take breaks when they need to, finish their work, and don't use a time clock.

While it would be nice if this were true, many companies leverage that attitude to extract unofficial overtime.

They don't demand you work 50 or 60 hours. Instead the boss demands that workers finish a feature by a specific date, and to meet that date the extra time must be put in.

Usually that can be avoided by interviewing well and identifying those companies up front. I've had one job with that mentality, and it lasted about six months (until the next job was lined up).

If workers are putting in unpaid overtime, that is a symptom both of managers who abuse their workers and of workers who don't value their own time. If the workers valued their time they'd demand change within the organization and leave en masse if it didn't come. Unfortunately, for whatever reason, they don't. Fear of insecurity, fear of unemployment, fear of change, fear of whatever; something is messing it up.

While a union is a terrible fit for computer programmers due to the wide variety of work skills, it is something that comes up in discussions every few years. If tech workers and programmers as a collective demanded the change, it could happen quickly.

TL;DR: Until a critical mass of workers demand better work conditions, bad businesses won't change. Good businesses already treat people with respect.

Comment: Re:Makes sense (Score 1) 272

But this isn't even a trademark dispute, it's a company policy dispute.

Exactly. So many people seem to miss it.

YouTube's ToS section 7 gives the method they would need to use to terminate his use of the Lush URL. Specifically, their ToS says: "A. YouTube will terminate a user's access to the Service if, under appropriate circumstances, the user is determined to be a repeat infringer. B. YouTube reserves the right to decide whether Content violates these Terms of Service for reasons other than copyright infringement, such as, but not limited to, pornography, obscenity, or excessive length. YouTube may at any time, without prior notice and in its sole discretion, remove such Content and/or terminate a user'su account for submitting such material in violation of these Terms of Service." (Typos in original)

So while YouTube gave themselves discretion to remove his access if it violates the ToS, and they gave themselves broad permissions in interpreting the ToS, ending his access still requires a ToS violation.

Their policy about what is required to terminate access is clearly specified in the ToS. The one violating that ToS is YouTube, not Mr Lush.

Comment: Re:Makes sense (Score 4, Insightful) 272

Besides, "Lush" is a standard common usage word that is neither copyrightable, nor trademarkable. IANAL

It is absolutely protected by trademark.

The very fact that he used it in commerce gives it automatic, de facto trademark protection. Even unregistered, the mark still has protection; defending an unregistered mark carries a higher burden of proof, but by his use in commerce he automatically gained several legal rights relating to trademark. Had he registered the mark, the protections would be even stronger.

But moving on from trademark, there is also the issue of YouTube's ToS agreement.

And that is where it gets REALLY interesting.

It is quite possible that Google/YouTube violated YouTube's published ToS here. Their termination policy (section 7 of the ToS) allows termination for (A) repeat infringement, which doesn't apply here, or (B) content violations: "YouTube reserves the right to decide whether Content violates these Terms of Service for reasons other than copyright infringement, such as, but not limited to, pornography, obscenity, or excessive length. YouTube may at any time, without prior notice and in its sole discretion, remove such Content and/or terminate a user'su account for submitting such material in violation of these Terms of Service."

While they do reserve the right to interpret their ToS, that doesn't mean they can make up reasons outside the ToS.

Comment: Re:Grand opening! (Score 5, Informative) 97

Let's Encrypt, a division of Shell Company, LLC., a wholly-owned subsidiary of Totally Not The NSA, Inc.

You seem to misunderstand the purpose and nature of these certificates. While it is fun as a joke, that isn't what it is for.

These certificates have never been meant to protect against government agencies or employers. Security geeks have always known that any intermediate actor in the chain can eavesdrop on and intercept the connection. That is not what they protect against. They protect by revealing the links in the chain.

SSL is intentionally vulnerable to those implementing a MitM attack, and many businesses and schools do exactly that. Quite a few major networking products have simplified MitM down to checking a checkbox. One of the biggest corporate reasons for this is to enable caching.

SSL is absolutely vulnerable to being (eventually) deciphered by anyone who eavesdrops, and to being modified by anyone holding a matching cert for any point on the certificate's trust chain. There are many accounts that major governments already hold certificates at those critical points.

So what does it offer? The most immediate benefits are replay prevention and an integrity guarantee. Imagine an attacker recorded a session of you logging into your bank and transferring funds. Without replay protection, and with no other replay protections by the bank, the attacker could replay that transaction over and over again, draining your account. Since both client and server theoretically contribute unique session keys to each session, sessions cannot be replayed. The integrity guarantee is also important: once your connection is established, those monitoring it cannot modify it without detection. That guarantee is fairly weak and subject to MitM exploits unless properly configured with EV certificates or two-way TLS requiring mutual authentication. Basically, you can detect all the links in the chain, but if one of those links is already compromised, that isn't the protocol's fault. If someone inside your trust chain intercepts and re-encodes your messages, the protocol won't stop it; all it will show is that the person is a link in the authentication chain.
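As a toy illustration of the replay-prevention idea (an analogy, not TLS itself), tag each session with a fresh random nonce and authenticate messages with a MAC over (nonce, message): a MAC recorded in one session fails verification when replayed in a new session. The key and message below are made up for the sketch, and a real handshake derives keys rather than sharing a static secret:

```python
# Toy replay protection: per-session nonce + HMAC over (nonce, message).
import hashlib
import hmac
import secrets

SECRET = b"shared-demo-key"   # stand-in for keys a real handshake would derive

def tag(nonce: bytes, message: bytes) -> bytes:
    # MAC binds the message to this session's nonce.
    return hmac.new(SECRET, nonce + message, hashlib.sha256).digest()

def verify(nonce: bytes, message: bytes, mac: bytes) -> bool:
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(tag(nonce, message), mac)

n1 = secrets.token_bytes(16)  # session 1's nonce
n2 = secrets.token_bytes(16)  # session 2's nonce (fresh, so replay fails)
mac = tag(n1, b"transfer $100")

accepted_live = verify(n1, b"transfer $100", mac)    # genuine message
accepted_replay = verify(n2, b"transfer $100", mac)  # recorded, replayed
```

In TLS the same effect comes from the per-session keys negotiated in the handshake: ciphertext captured under one session's keys is meaningless under the next session's.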

It also offers a moderate degree of authentication that the host you are connecting to matches who they claim to be; that is, with a TLS or SSL connection, if you know the certificate, then you have an authentication chain showing the site matches. Just like the integrity guarantee, the protocol shows you all the links and nothing more. You still need to watch out for weak links: if one of the links in the certificate chain is your corporate proxy or your school's servers, you should assume that link is compromised, which is the most common MitM attack.

The protection most people think of -- protection from eavesdropping -- is only a very weak one and not guaranteed by the protocol. The encryption adds a cost for any eavesdropper who is not part of the security chain, but for most of the encryption protocols in use, that protection can be overcome with a large budget.

Comment: Re:Uber doesn't own the vehicles, correct? (Score 4, Insightful) 346

Am I missing something here?

Yes, two things.

The first thing is that you are using your own definitions and not the ones applied by labor law. There are six guidelines used by the Department of Labor: integral to the business, permanency of the relationship, worker's investment in equipment and facilities, nature and degree of control by the principal, worker's opportunity for profit/loss, and skill/training necessary. While your brief lists are interesting, they don't match what the government actually uses.

The second thing you are missing is the definition of contractors. This is about the legally defined "independent contractor", the 1099'er, one type of contractor who is effectively a person operating as a business. There are other types of jobs people call contracting, such as short-term employment (W2 with a time limit), or cases where employees of one company are brought in to work with another company's employees. The decision is only about the 1099 style of contracting, which Uber uses.


Going through each of the government requirements as they apply to Uber and your Ebay seller example:

Integral test. Uber's core business is connecting people for rides and moving funds between accounts. Drivers provide rides using the service, but they aren't integral to the business of connecting people (although they are necessary to implement the task). Ebay sellers similarly use the service, but aren't integral in providing the service. MOSTLY NEUTRAL, slight bias toward employee.

Permanency test. Some Uber drivers meet this, others don't. Those who infrequently pick up riders, those who are on for an hour or two during the day, they're not really permanent. The ones who have used Uber to replace their income, or drive for many hours each day, they're much more permanent. Most ebay sellers are extremely transitory, having items up for under a week, or using it as a store front for goods that are constantly rotated. WEAK FAIL, some people biased towards employee, others biased toward 1099'er, so maybe some people should be reclassified.

Investment test. Uber has some investment through insurance and their guarantees, but leaves most of the cost to the individual. They've got a weak investment. Ebay has nothing invested in the sellers. WEAK FAIL, the long list of guarantees and insurance they offer to their drivers pushes toward employee.

Nature and degree of control test. Uber has a high degree of control, coordinating all the details of rides, establishing fares, redistributing drivers based on its algorithms, and imposing requirements about vehicle cleanliness and maintenance, but it has weak control in other areas by not dictating work hours and a few other details. Ebay has zero control. STRONG FAIL, Uber's heavy control over what drivers do pushes strongly toward employee.

Opportunity of P/L test. Uber sets the fare, takes its cut, and the driver gets no options. There is no opportunity for additional profit or loss; nothing drivers do personally can modify their results, get more business, or get better rates. Ebay sellers, by contrast, can operate under whatever terms they choose, including running full brick-and-mortar stores, as many do. STRONG FAIL, these "independent contractor" Uber drivers cannot operate as independent businesses.

Level of skill/business acumen test. Uber drivers are hired for being able to drive. They cannot really market themselves independently, take advantage of business insights, leverage their personal strengths, or modify their business based on personal skills or talents. Strong contrast with Ebay, where sellers have a large degree of control over what they sell, how it is presented, and other factors of skill and business acumen. STRONG FAIL, these "independent contractors" cannot operate independently, leverage skills, or add any effective flair.


When it comes to tax status and employment status, I'm pretty sure the commission got this one right, or at least right for the common case. It may not fit well for those who only run the app a few hours each month; that small percentage of drivers might be better classified as 1099 independent contractors. But those driving more than around ten hours each week probably fit better under the employee definition, and those driving more than twenty hours per week strongly fit it.

Comment: Re:for 1099ers W2 contractors working for a firm / (Score 4, Informative) 229

for 1099ers W2 contractors working for a firm / outsource don't fail under that rule.

That's an unfortunately common misunderstanding.

There are a lot of things bunched into the "contractor" name in recent years:
A. Working for a company under a 1099 tax reporting system, the person operates under their own business independent of the company. This is a real "independent contractor".
B. Working for a company under a W2 tax reporting system, the regular employee loses their job at the end of the temporary employment. This is a temporary worker or contingent employment.
C. Working for a company under a W2 tax reporting system, but that company is closely working with another company and the individual is assigned to work under their purview. This has many different names.

The rule they are supposed to follow, which Microsoft and many others have gotten in trouble with, is that they cannot bring in people in group A -- independent contractors under the 1099 tax system -- and treat them as though they are group B or C -- regular employees under the W2 tax system whose employment contract may or may not have a built-in termination date. This is mostly about tax differences, since the government generally gets less revenue from option A.

Many companies will bring in people through contracting companies like Deloitte or SAP. That is case C. These people are employed by one company as regular employees, and the two businesses have a working agreement. The individual is a regular worker and needs to have all the regular labor laws followed. This arrangement can happen for many years. Giving non-technical examples, you may have a car rental company with a single worker at an auto repair facility, or have building security hired through one company where the individuals report to work at the facility yet are hired, paid, and given other benefits by another business.

To confuse things, many times the companies involved in option C will hire their workers under option B. The workers are brought in from a separate company like Deloitte (option C), and those workers are hired by Deloitte as W2 workers with a temporary employment agreement (option B).

Unfortunately for workers, big companies often confuse the rules for them, calling them all "contractors" and dumping them under the same rules. Workers who were hired under option A must be able to work for additional groups. Companies get in trouble with option A when they keep the person too long since they stop looking like independent contractors and start looking like regular employees. When companies lay off lots of "contractors", usually they are laying off people under option B or C, but then refuse to hire them again because that is a rule for those under option A.

Comment: Re:I'm one of those people (Score 1) 336

I'm another who has been both in and out of the industry several times.

I only agree with one of your problems:

I absolutely agree that a crunch is entirely a failure of management. Of the published games I've been on, only one suffered a moderate crunch. Everyone, including the managers involved, could identify the management problem of committing to more features than we could deliver by the date. Unfortunately for the studio, the contract was with a well-established global brand: few features could be cut without major financial penalties, and budget constraints meant the date was difficult to move. That particular studio was in a downward spiral. For studio management it was a choice between the bad contract or laying off the entire team, about half of the company. They were responsible enough to make those options clear to the team, even going so far as to put it to a private vote: half the studio laid off, or work the tough project and keep their jobs. Most of the team voted to keep their jobs (while sending out resumes). Ultimately about half the team quit upon finding other employers.

Your other issues are not industry wide. The FPS map design issue is just that, FPS maps, and FPS makes up a tiny portion of the industry by numbers; take it up with the level designers if it bothers you, or talk to the designers about changing the mechanics required. Of the 14 titles I've finished and the roughly 30 other ideas I've helped scope and prototype, only one was an FPS, and ultimately we didn't go that route. For the payment complaint, the freemium model doesn't have much to do with the developers directly, more to the design and business ends. If you can help identify a better model for your game that integrates well, go for it. Freemium can work well especially in mobile where people are expecting free-to-play for almost everything, but it isn't the only model. And the comment about cost of content depends entirely on the product, many types of games can be built without relying on expansive (and expensive) 3D worlds with realm after realm of costly hand-made content.

I'd say pay and respect are the two biggest issues. Too many studios fail to treat their developers with respect: they disrespect them by failing to do their job of properly managing the products, they disrespect their time and wages by not scoping projects and by requiring overtime, and they disrespect them by not communicating what they know. Many studios are good, some are terrible. I've found both pay and respect tend to be good at small companies of 10-30 people; it obviously varies by company and requires identifying the bad places, but the small studios I've worked at tended to be great places that pay quite well and generally avoid overtime. The trick is finding them while they're small, sifting the good ones from the bad, and realizing when it is time to move on, before the well-respected, nicely compensated craftspeople become corporate drones with seasonal layoffs. In my region I take about a 10% pay cut by working in games and enjoy the extra money when I work in business software, but even at the lower wage it is still a solid six-figure salary that puts my family well above the middle class.

And as for your list, most of those games stopped being Indie years ago. There are many indie games festivals with great new products if you are looking for innovation.

Comment: On Shopping Around (Score 4, Insightful) 1032

The price of a college education -- let's just say a 4-year bachelor's degree -- isn't the problem. Rather, it is a symptom of both the ability to get a large student loan and the desire for a traditional 4-year degree.

As an analogy, consider the housing market: the value of a house is what someone is willing to pay for it, and what someone is willing to pay is a function of their assumptions about its future value and their ability to fund the purchase with money they don't already have.

No, not all homes are equal, nor are school tuition rates. There are a relatively small number of multi-million dollar mansions, but apartments and inexpensive homes are plentiful.

The article is more like someone complaining that a Ferrari is expensive and refusing to consider the thousands of other lower-cost options.

Too many people look at the costs of a single school. There are a huge number of schools -- Wikipedia says 4,726 in the US. The median tuition across all of them is $5,832 per year, which is quite reasonable: half of all schools cost that much or less. Yet the mean is $23,874 per year. If you are comfortable with statistics, those two numbers tell you the bulk of schools are inexpensive, and a small number of hugely expensive schools skew the average high. As a parallel, it is like a middle-class neighborhood where a few billionaires moved in; those few high-wealth individuals dramatically shift the average, so that the "average wealth" suggests everyone is a millionaire even though nearly everyone is middle class. The median cost of higher education is reasonable. Just be smart and pick a school you can afford.
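That billionaire-in-the-neighborhood effect on mean versus median is easy to check with invented numbers (these tuition figures are illustrative, not the Wikipedia data):

```python
# A skewed distribution: 90 inexpensive schools, 10 very expensive ones.
# The outliers drag the mean far above the typical (median) case.
from statistics import mean, median

tuitions = [5000] * 90 + [50000] * 10

typical = median(tuitions)   # unaffected by the outliers
average = mean(tuitions)     # pulled up by the ten expensive schools
```

Here the median stays at $5,000 while the mean nearly doubles to $9,500, which is the same shape as the $5,832 median versus $23,874 mean quoted above.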

Locally, my kids can go to one of several good junior colleges nearby, which all cost about $1500 per semester, then move on to one of several state universities that cost around $3500-$4000 per semester -- about $25,000 total for the four years of education. For my region at least, Wikipedia lists 11 inexpensive 2-year colleges and seven state universities, all within commuting distance. Or my kids could attend one of the local private for-profit schools the whole time. One popular private school charges just shy of $20,000 per semester. That is, one semester at the expensive (but heavily marketed and popular) for-profit private school costs the same as a full four-year degree elsewhere.

The author of the article, Lee Siegel, attended Columbia University according to Wikipedia. That school is a private Ivy League institution that currently charges $51,008 per year. We could put two students all the way through their bachelor's degrees with the funding for a single year at that school. And he went there for probably seven years, so he committed to roughly $350,000 in costs when he could have chosen a similar education at one tenth the cost or less.

So really, this is not so much a complaint about the cost of schooling generally. He is complaining that everyone should have a Ferrari they cannot afford, even though for most people a Prius or Accord or Corolla, at one tenth the cost or less, is both affordable and adequate.
