Comment The real problem - conjecture (Score 1) 526

The problem isn't "good enough" software getting the ball rolling. It's when "good enough" continues to be the MVP at scale. The scale of a project, which can be any combination of complexity, user-base size, lines of code, people working on it, etc., has a limit that is a result of the quality. The whole issue with a "ball of mud" isn't that it's difficult to start, it's that it's difficult to maintain and extend. It is quite well studied that as a system grows, if you have any centralized contention, you quickly plateau in productivity, and can very well achieve negative scaling.

Amdahl's law for code quality. High quality code has low contention and allows for massive scaling, but can be quite slow at small scales.
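To make the analogy concrete (with invented numbers, not data from any study): Amdahl's law caps speedup at 1/s when a fraction s of the work is contended, and adding a coherence term, as the Universal Scalability Law does, is one way to model the plateau turning into outright negative scaling.

```python
# Toy model of scaling vs. contention (illustrative numbers only).
def amdahl(n, serial):
    """Amdahl's law: speedup with n workers when a fraction `serial` is contended."""
    return 1.0 / (serial + (1.0 - serial) / n)

def usl(n, contention, coherence):
    """Universal Scalability Law: adds a pairwise-coherence cost that can go negative."""
    return n / (1.0 + contention * (n - 1) + coherence * n * (n - 1))

for n in (1, 2, 8, 32, 128):
    print(f"{n:>4} workers: Amdahl {amdahl(n, 0.05):6.2f}x   USL {usl(n, 0.05, 0.001):6.2f}x")
```

With a 5% contended fraction you never get past 20x no matter how many people or machines you throw at it, and once the coherence cost kicks in, throughput actually falls again.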

Comment Re:The Actual Danger. (Score 1) 526

In other words, the proliferation of "good enough" software has allowed markets to form that would not otherwise have formed, due to the barrier to entry of doing it "correctly". Evolution at its finest. Why are my testicles hanging outside my body, ready to be damaged? Good enough, no time to refactor.

Comment Re:The Actual Danger. (Score 1) 526

I'm not aware of a lemon law that applies to something you took for free. When you purchase a license to use software, that contract should have some sort of lemon law protecting the user, at minimum a full refund. And of course any computer appliance should have this same protection for the software that drives it.

The problem is really marketing. We have companies claiming "enterprise" and "server" quality. These labels should carry legal responsibility that allows an end user to sue for damages caused by professional negligence.

Comment Re:I have not noticed the internet being slow... (Score 1) 32

Higher speeds increase network efficiency. Trunk congestion is almost entirely correlated with the duration of transfers. Most transfers consume a fixed amount of data, so reducing the duration by increasing the rate at which a customer can transfer that data greatly benefits the choke points.
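As a back-of-the-envelope illustration (all numbers invented): with fixed-size transfers, the number of flows simultaneously occupying a trunk is roughly arrival rate times transfer duration (Little's law), and duration drops linearly as per-customer rates go up.

```python
# Rough sketch: concurrent flows at a trunk ~ arrival_rate * duration (Little's law).
# The transfer size and arrival rate below are made up for illustration.
transfer_size_mb = 25.0      # fixed-size object, e.g. a page's worth of assets
arrivals_per_sec = 2000.0    # new transfers hitting the trunk per second

for customer_mbps in (10, 100, 1000):
    duration_s = transfer_size_mb * 8 / customer_mbps
    concurrent = arrivals_per_sec * duration_s
    print(f"{customer_mbps:>5} Mbit/s per customer -> {duration_s:6.2f} s per transfer, "
          f"~{concurrent:8.0f} flows in flight at the trunk")
```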

Comment Re:LAA is actually news for me! (Score 1) 46

Interfering is not the same as contending. The signalling aspects of the protocols play decently well together. What doesn't play well is the back-off mechanism: unless they use the exact same algorithm with the exact same timing, one is going to dominate the other in the presence of congestion.
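Here's a toy slotted model of that dominance effect. It is not the real 802.11 or LAA back-off algorithm, and the contention-window numbers are invented; the point is just that whoever draws from the smaller window wins most of the congested slots.

```python
import random

# Toy slotted contention model: each side draws a random back-off from [0, cw).
# The smaller draw transmits; a tie is a collision and both double cw (up to cw_max).
def simulate(cwmin_a=16, cwmin_b=64, cw_max=1024, slots=100_000):
    wins = {"A": 0, "B": 0, "collision": 0}
    cw_a, cw_b = cwmin_a, cwmin_b
    for _ in range(slots):
        a, b = random.randrange(cw_a), random.randrange(cw_b)
        if a < b:
            wins["A"] += 1
            cw_a = cwmin_a                              # winner resets its window
        elif b < a:
            wins["B"] += 1
            cw_b = cwmin_b
        else:
            wins["collision"] += 1                      # both back off harder
            cw_a, cw_b = min(cw_a * 2, cw_max), min(cw_b * 2, cw_max)
    return wins

print(simulate())   # A, with the smaller window, grabs the large majority of slots
```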

Comment Re:Crypto - Yes / Bitcoin - No (Score 1) 79

A single bitcoin block can process around 1,000 transfers. But my limited understanding is that the transfers chosen by the miners crunching the hashes are generally based on bids. If you don't bid enough, it may be a while before your transfer is processed. At one point in the past, I heard about weeks-long delays.
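A rough sketch of how that bidding plays out, as I understand it (the sizes, fees, and block limit below are made up; real miners select by fee rate against a block weight limit):

```python
# Simplified mempool sketch: fill a block greedily by fee rate (fee / size).
mempool = [
    {"id": "tx_a", "size": 250, "fee": 5000},
    {"id": "tx_b", "size": 400, "fee": 400},    # low bid
    {"id": "tx_c", "size": 300, "fee": 9000},
    {"id": "tx_d", "size": 250, "fee": 2500},
]
block_size_limit = 800  # far smaller than a real block, to force competition

block, used = [], 0
for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
    if used + tx["size"] <= block_size_limit:
        block.append(tx["id"])
        used += tx["size"]

print("included this block:", block)          # the high fee-rate bids get in
print("still waiting:", [t["id"] for t in mempool if t["id"] not in block])
```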

Comment Re:In before... (Score 1) 29

Many years back I was reading about high-density-pixel cameras where a single logical pixel was actually composed of several physical pixels, each collecting slightly different information, like phase timing. This allowed them to do cool things computationally, like bringing the entire photo into focus.
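If I remember right, the refocusing trick is essentially shift-and-sum over those sub-pixel views. A minimal sketch of the idea on a synthetic stack of views (the array shapes and the alpha refocus parameter are illustrative assumptions, not any particular camera's design):

```python
import numpy as np

# Minimal shift-and-sum refocus sketch over a grid of sub-aperture views.
# views[u, v] is the image seen through sub-pixel (u, v); here it's synthetic noise.
U, V, H, W = 5, 5, 64, 64
views = np.random.default_rng(0).random((U, V, H, W))

def refocus(views, alpha):
    """Shift each view proportionally to its (u, v) offset and average.

    `alpha` picks the virtual focal plane; integer shifts via np.roll keep it simple.
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

near = refocus(views, alpha=1.0)    # one virtual focal plane
far = refocus(views, alpha=-1.0)    # another, from the same single capture
print(near.shape, far.shape)
```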

As for the people saying "it's smaller than the wavelength": I'm not sure about available products, but there are micrometer-scale antennas in labs that can not only receive signals with wavelengths orders of magnitude larger than the antenna itself, but can also reject nearly all other interference. They've already demonstrated single-atom antennas that can work with virtually any frequency.

Comment Re:And Nothing Of Value Was Lost (Score 1) 157

Only basic caching is feasible this way. But you have the issue of HTTPS and not being able to see a customer's traffic. Transparent caching should not be passively done by a 3rd party; we would need caching to be handled by the clients, essentially a way for my browser to find a CDN and query it for certain data. I've done some reading on HTTPS caching, and the biggest complaint is that most data cannot be cached. People report something like sub-10% cache hit rates even with terabyte-sized caches. The largest percentage of objects are static, but the largest number of requests are dynamic, and clients already cache most of the static content they've seen.

Then you have the Netflixes and YouTubes of the internet, where you can't do simple caching. Netflix used to do this in the early days but quickly found their caches getting thrashed. In their situation, they pin their caches and use algorithms to figure out what data to forcefully load into them, and they recalculate the caches once per day.
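A toy model of why the reactive approach thrashes and the pinned approach doesn't (catalogue size, cache size, and the popularity curve are all invented, and the pinned cache here cheats by knowing the true popularity up front):

```python
import random
from collections import OrderedDict

# Toy model: 100k titles with Zipf-like popularity, a cache for 1k of them.
random.seed(1)
TITLES, CACHE_SIZE, REQUESTS = 100_000, 1_000, 200_000
weights = [1 / (rank + 1) ** 0.8 for rank in range(TITLES)]
requests = random.choices(range(TITLES), weights=weights, k=REQUESTS)

# 1) Reactive LRU: every miss inserts and evicts, so tail titles churn the cache.
lru, lru_hits = OrderedDict(), 0
for title in requests:
    if title in lru:
        lru_hits += 1
        lru.move_to_end(title)
    else:
        lru[title] = True
        if len(lru) > CACHE_SIZE:
            lru.popitem(last=False)

# 2) Pinned cache: load the predicted-popular titles once and leave them alone.
pinned = set(range(CACHE_SIZE))     # pretend yesterday's prediction was perfect
pin_hits = sum(title in pinned for title in requests)

print(f"LRU hit rate:    {lru_hits / REQUESTS:.1%}")
print(f"Pinned hit rate: {pin_hits / REQUESTS:.1%}")
```

The point isn't the exact hit rates, just that a once-a-day pinned fill avoids the miss-driven churn that the long tail inflicts on a reactive cache.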

Comment Fundamentally flawed (Score 1) 79

Making all search results equal is antithetical to what a search engine is. The purpose of a search engine is to be biased, and there are many different biases that can be used; popularity is a common one. As a person who does a decent amount of research, I don't care about popular, I want useful. But how would I even go about defining that in a searchable way? The most useful information might be completely unpopular.

There's a huge issue in that Google products are already popular and decently useful. It wouldn't be difficult to game any regulation by tweaking the importance of different factors in such a way that Google products would "fairly" float to the top. Unless the regulators want to get into the business of deciding which factors are most important and forcing all search engines to only ever use those factors, their only other avenue is to regulate the end result.

Even then, you can't have "fair". Let's say there are 50 popular cellphones on the market right now. If I search for a cellphone to buy, I could expect dozens of results per cellphone. To make things "fair" and give all of the cellphones their screen time in the top 10 results, you'd have to randomize the ordering. As an end user, I would hate this. The only other option is to always show the same 10 results, which means not all 50 will get their "fair" share.
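To put numbers on that (made-up ones): 50 phones and 10 randomized slots means each phone gets its "fair" 20% of result pages, which is exactly why any individual page of results would look arbitrary to the user.

```python
import random

# 50 hypothetical phones competing for 10 randomized "fair" top-10 slots.
random.seed(0)
PHONES, SLOTS, QUERIES = 50, 10, 100_000
appearances = [0] * PHONES
for _ in range(QUERIES):
    for phone in random.sample(range(PHONES), SLOTS):
        appearances[phone] += 1

shares = [count / QUERIES for count in appearances]
print(f"per-phone share of top-10 pages: {min(shares):.1%} to {max(shares):.1%} "
      f"(expected {SLOTS / PHONES:.0%})")
```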
