It seems that the news industry believes we cannot do without them, and that we must pay for the privilege of keeping them in business.
It's quite hysterical. They're in for a big surprise.
would be to know where Debian is heading.
I'd very much like to support a distro with clearly stated technical and societal values that mirror my own, but it's hard to discern exactly what Debian's values are anymore. Merely embracing GPL licensing and its values doesn't tell you much, because even code with ethically questionable goals can be GPL.
Perhaps it's time for a Debian Conference in which "What do we stand for?" could be addressed and made a little more specific.
Actually, there are plenty of rational comments in this thread, but they've all been moderated down to -1.
Why every unfavourable comment has been greeted with "nuke from orbit" is an interesting question, but it's clear that in this thread, rational discussion and dissenting opinion are not welcome.
Slashdot seems to be getting more and more like this. I've been here a long time, but I can't really say I know why it's happening. Maybe the art of nuanced discussion is disappearing from public spaces in general.
'Android has now irreversibly destroyed Java's fundamental value proposition as a potential mobile device operating system'
Java is a programming language, not an operating system. Examples of operating systems are Linux and Unix.
Nothing could have "destroyed Java's fundamental value proposition as a potential mobile device operating system" because the value proposition of Java as an operating system is zero, and always has been. It's like the value proposition of an orange to be an apple.
Oracle's nonsensical claim might be merely a case of lawyers or managers showing their ignorance of the computing subject domain or just being sloppy with their terminology, which is not uncommon. However, it gets worse.
A proprietary software package may have a calculated expectation of market share and profit if there is no competition, but this is not the case with programming languages because they always have competition from countless other languages. It is especially not the case with open source programming languages because they typically enjoy multiple implementations, and these make captive markets almost impossible to maintain.
It seems therefore that Oracle's market expectations were based on a flawed analysis.
That mistake would have made any market expectations unsafe, but they were dealt a further blow by Oracle's highly abusive attempt to copyright SSO (structure, sequence and organization) in their litigation against Google. This must have alienated practically everybody who knows anything about programming, and the likelihood is high that many Java programmers who had other languages available shunned Java like the plague to avoid potential SSO copyright liability.
In other words, if anyone killed off interest in Java, it was probably Oracle themselves.
Control Theory is applied mainly to electronic systems, but it's equally applicable to all systems everywhere, with no exception. That includes networking, and it even governs human systems.
It's a truism in Control Theory that a system without negative feedback is a system that is out of control. All non-trivial systems without negative feedback head towards an uncontrolled state on the slightest perturbation of initial conditions.
Email is one such system. It was designed without negative feedback back in the early days of the academic Internet before malicious actors appeared on the scene. Because there is no "cost" associated with sending an email, the system went out of control --- the primary effect of that is spam. (This "cost" has nothing to do with money.)
In Control Theory terms, "cost" is any control metric that tracks an undesired effect and reduces that effect when applied to its cause. One of the most universal undesired effects is resource consumption, and that's directly applicable to the email problem because many kinds of resources are used up by spam when it arrives at MTAs and at end-user mailboxes --- examples are CPU time, storage space, network bandwidth, end-user time, and many other things. They're all resources, and spam is the direct result of the spammer feeling no "cost" when he consumes other people's resources. There is no negative feedback being applied to his posting of spam.
"Cost" in the control theoretical sense could be many things when applied to email, for example a slowdown in the spammer's ability to post his next email proportional to the rate of sending and to the number of recipients. There are dozens of possible ways to make a spammer feel a "cost" as negative feedback for his actions, many of them leaving normal mail users entirely undisturbed by the negative feedback. Unfortunately email has none of these control methods available, and it probably never will because it's too late in the day.
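To make that concrete, here's a toy sketch in Python of one such proportional "cost". Everything here is invented for illustration (the class name, the constants, the decay model); it's not any real MTA mechanism. Each send raises an internal "pressure" value in proportion to the recipient count, pressure decays between sends, and the pressure is the mandatory delay before the next send is accepted:

```python
class FeedbackThrottle:
    """Toy negative-feedback 'cost' for a mail submission path.

    Each send raises 'pressure' in proportion to the number of
    recipients; pressure decays between sends. The returned value
    is the delay (in seconds) the sender must wait before the next
    send. All constants are illustrative, not from any real system.
    """

    def __init__(self, cost_per_recipient=0.5, decay=0.9):
        self.pressure = 0.0
        self.cost_per_recipient = cost_per_recipient
        self.decay = decay

    def delay_for_send(self, recipients):
        # Negative feedback: the more (and bigger) the recent sends,
        # the longer the wait before the next one is accepted.
        self.pressure = (self.pressure * self.decay
                         + recipients * self.cost_per_recipient)
        return self.pressure
```

A light user sending occasional one-recipient mails sees sub-second delays, while a bulk sender blasting thousands of recipients accumulates delays that quickly dominate his sending rate, which is exactly the self-limiting behaviour negative feedback is supposed to produce.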
One day however, a new asynchronous communication protocol will be designed to replace SMTP. It must be designed with a mechanism for negative feedback integral to the protocol and non-optional, or else the spam problem will appear again, sure as night follows day.
Note that email isn't the only system in computer networking that is out of control. For example, there is no negative feedback applied to rampant abuse of user-side scripting by web pages: web developers feel no cost regardless of how much end-user CPU, storage, or network bandwidth they consume, and because nothing pushes back against that over-use, browsers routinely sit with their CPUs pegged at 100% and the Web has turned to molasses. As techies we try to control the Web's excesses with NoScript (for example), just as we try to control spam with SpamAssassin, but these only fight symptoms, and you can't cure a disease by fighting symptoms.
This is a universal truth. No negative feedback spells trouble ahead.
There's a problem with mobs: they gang up and lynch anyone who isn't part of the mob.
This doesn't happen just in westerns. It's been happening since the dawn of time, because it's a natural property of crowds: the least able thinkers are the ones most likely to be swayed by group-think. And one of the strongest group-think arguments is "Outsider, danger to group, kill it", which is a very effective survival M.O. for life below a certain threshold of intelligence. The combination of these two aspects of mob behaviour is predictable.
That makes TFS and TFA a bit of an exercise in wishful thinking. Development by mobs could (in theory) work well, but only in the very unlikely situation that the mob has a statistically improbable makeup in which independent thinkers are dominant and are also well informed and technically experienced. Unfortunately that scenario lies in "pigs will fly" and "hell freezes over" territory.
The perfect number in team programming is two people of similar experience, because then they can't gang up and form a lynch party. If they don't immediately agree then it creates a stalemate which can be broken only by rational explanation / defence or by terminating the pairing. It's an ideal situation, yet not too hard to arrange.
Mobs don't really have a place in intellectual endeavours, and programming is one of those.
What will I miss?
On IPv4, you won't be able to reach the endpoints of millions of computers and other devices that have IPv6 addresses now (eg. Android always looks for IPv6 connectivity on startup). This is relevant not only in the east where new IPv4 address blocks are no longer available, but also here in the west where IPv6 deployment is continuing and accelerating.
Your "What will I miss?" question is pure IPv4 thinking, because in IPv4, NAT makes almost everything except static public servers inaccessible as individual device endpoints are typically hidden. That's a severe limitation in IPv4, and you've become conditioned by it and so you're expecting a reply involving a list of websites. It's incredibly narrow thinking.
With IPv6, a user on any random portable device can share an object with you directly, without needing to upload it to a public website first. You could be chatting with them on IRC and they write "Hey, look at this weird stuff I'm seeing on my phone", and you just point a browser or image app at their IPv6 address and bingo, you see whatever they're making available, live. You can't do that with IPv4 because there aren't enough IPv4 addresses for every device to have one, and connections to arbitrary endpoints are typically blocked by NAT anyway.
That's why in IPv4 people have to upload stuff they want to share to public websites first, which is annoying and limits the content protocols that can be used. Applications can be much more versatile and immediate in IPv6, and you will be missing all that directly-available content if you can't reach the IPv6 endpoints of devices. It can't be done on IPv4.
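One small practical detail behind the "point a browser at their IPv6 address" step: an IPv6 literal has to be wrapped in square brackets inside a URL (per RFC 3986), since its colons would otherwise be read as the port separator. A minimal Python sketch (the helper name is invented, and the example address uses the 2001:db8::/32 documentation prefix):

```python
def url_for(host, port, path="/"):
    """Build an http URL, bracketing IPv6 literals as RFC 3986 requires."""
    if ":" in host and not host.startswith("["):
        host = f"[{host}]"  # colons mark an IPv6 literal, so bracket it
    return f"http://{host}:{port}{path}"

# url_for("2001:db8::1", 8080) -> "http://[2001:db8::1]:8080/"
```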
What are the beneficial FEATURES to dumb end users?
I'll bite, as that's a perfectly reasonable question. OK, no technical info at all in the following list; the technical answers are given in detail elsewhere.
Benefits of IPv6 for dumb (meaning non-technical) END USERS:
- All protocols work over IPv6, unlike the breakage on IPv4.
- IPv6 "just works" without user setup, great autoconfiguration.
- As many public IP addresses as you want for devices on IPv6.
- Safer because network security is built into IPv6, not optional.
- Add IPv6 to see the whole Internet, not just the IPv4 part.
- New quality of service features for stutter-free video or gaming.
- Faster networking for a better all-round user experience.
Each of these 7 benefits has a technical reason for which the corresponding improvements were added to IPv6 by design to improve on IPv4. These benefits are available to everyone, and non-technical users don't need to understand the details to enjoy the benefits.
The official "switch-on for good" of IPv6 a year ago was entirely seamless in my experience. There wasn't anything to fix, as nothing was broken, and IPv6 autoconfiguration handles everything, so there isn't even any setup involved; it just works. This simplicity will be a boon for non-technical users once the IPv6 rollouts gain steam.
Unfortunately the ISPs are still dragging their feet, so public rollout is slow, but the trend is always upward, and the adoption curve is close to exponential, so IPv6 will be ubiquitous before long. So many ISPs are currently planning their rollouts that there's going to be a sudden upsurge when those plans finally materialize.
People shouldn't talk about switchover to IPv6 though, that's not how it works. IPv4 and IPv6 networks run together side by side, and you use both together. Your application (eg. browser) generally picks IPv6 if your destination is accessible on that network, or else it falls back to IPv4. This is all automatic of course. It's better described as a switch on of IPv6 by your ISP followed by your gradual increasing use, not a switchover. There is no plan to switch off IPv4. The last remnants of IPv4-only equipment could still be around and operational for decades ahead.
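The fallback behaviour described above (prefer IPv6, drop back to IPv4 if the destination isn't reachable over it) can be sketched without any real networking. In this toy Python sketch the candidate list and the try_connect callback are stand-ins for getaddrinfo() and an actual TCP connect; real browsers use Happy Eyeballs (RFC 6555), which interleaves the attempts with timers rather than trying them strictly in order:

```python
def pick_and_connect(candidates, try_connect):
    """Try IPv6 candidates first, then IPv4; return the first that connects.

    candidates:  list of (family, address) pairs, e.g. ("ipv6", "2001:db8::1")
    try_connect: callback returning a connection object, or None on failure
    """
    # Stable sort: IPv6 entries float to the front, order otherwise kept.
    ordered = sorted(candidates, key=lambda c: 0 if c[0] == "ipv6" else 1)
    for family, addr in ordered:
        conn = try_connect(family, addr)
        if conn is not None:
            return family, addr, conn
    return None  # nothing reachable on either network
```

With an IPv6 path available the function picks it even if the IPv4 candidate was listed first; if the IPv6 attempt fails, it silently falls back to IPv4, which is the "automatic" behaviour described above.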
IPv6 works so well that I recommend everyone get on it as soon as they can. You'll be able to see 100% of the Internet, whereas if you don't have IPv6 then you're only seeing a part of it. IPv4 is by far the larger part for now of course, but it's not all of it, and the parts you can't reach are growing daily.
Happy First Anniversary of the official turn-on, IPv6!
The Internet was founded upon the idea of open interoperation between all endpoints and federation between different instances of the same service protocol (think of SMTP and globally interoperating MTAs). These concepts were so fundamental that they are mentioned explicitly in the IETF Mission Statement as their central goal.
Then Big Business came along, and they didn't like the concept of a level playing field of unhindered interoperation and federation. Now almost every large corporation is trying to fence off their little corner of the Internet into a private realm which they guard jealously. Other companies are denied interoperation unless they pay up (or it's denied entirely), and federation between like services is virtually unknown. There is no "Facebook service" which anyone can install and then be able to federate their content to and from Facebook as peers.
Virtually all of the megacorps today are behaving this way: Facebook, Google, Amazon, Yahoo, Apple, Microsoft, and so on. They all hate the open Internet, and have closed it off at the application layers of the protocol stack so that you have to be an enrolled member of their private realm to participate. The closing of APIs is par for the course as they don't want interoperation, and federation even less. TFS is spot on.
At least we still have federated SMTP and unrestricted search engines, although probably that's only because they're data mining our email and search queries. It's no longer the open Internet we once had, but more a system of feudal lords and their private domains, and everyone else is a peasant.
It's a severe regression of Internet utility, and it's of benefit only to them.
Does this give us anything Raspberry Pi didn't?
If successful, it would give the ARM world a PC-like, vendor-neutral standard architecture, and so it would counteract the horrible balkanization of ARM communities by every manufacturer's boards being different.
Even if this doesn't succeed, standardization is a very worthwhile goal for ARM (just as it was for x86 PCs), and it's quite important that a broadly funded organization has recognized the need. It will also usher in the days of ARM64, at last.
There is one glaring omission in the spec though: the lack of Ethernet. No Ethernet means extremely limited sales outside of mobile, and at the HiKey's price of $129 it would need to be gigabit Ethernet at that.
Free and Open Source Software (FOSS) has achieved immense success worldwide in virtually all areas of programming, with only one major exception where it has made no inroads: FPGAs. Every single manufacturer of these programmable devices has refused to release full device documentation which would allow FOSS tools to be written so that the devices could be configured and programmed entirely using FOSS toolchains.
It's a very bad situation, directly analogous to not being able to write a gcc compiler backend for any CPU at all, and instead having to use a proprietary closed source binary compiler blob for each different processor. That would have been a nightmare for CPUs, but fortunately it didn't happen. Alas it has happened for FPGAs, and the nightmare is here.
The various FPGA-based SDR projects make great play about being "open source, open hardware", but you can't create new bitstreams defining new codecs for those FPGAs using open source tools. It's a big hole in FOSS capability, and it's a source of much frustration in education and for FOSS and OSHW users of Electronic Design Automation, including radio amateurs.
If FPGAs are going to figure strongly in amateur radio in the forthcoming years, radio amateurs who are also FOSS advocates would do well to start advocating for a few FPGA families to be opened up so that open source toolchains can be written. With sufficient pressure and well presented cases for openness, the "impossible" can sometimes happen.
Because there is a right way to tell fictional stories?
There isn't a single right way because there are infinite possible futures, and it's reasonable to assume that inventive SciFi authors would want to explore that huge space of possibilities. There are unlimited right ways.
Nor is there a single wrong way, but if all authors narrow their horizons to describing only simplistic futures in which most cultural elements remain unchanged then clearly there is a problem of deliberate myopia which will inevitably lead to a poverty of novel material.
It's a bit like surrounding oneself with yes-men --- it doesn't promote pushing the envelope and expanding the mind in new directions. In the context of SciFi, if cultural elements are shackled to present-day norms then it creates a literary monoculture with very few interesting elements. Even worse, it's factually incorrect, since we know that cultures change strongly with time.
It is acceptable to be factually incorrect in fiction, but when a whole genre that is predicated on gazing into the future knowingly avoids addressing cultural change then there is indeed a problem, and a very big one. SciFi readers deserve better than just present day stories adorned with spaceships.
The Permaculture community and advocates of companion planting have been around for decades preaching this same message: that plants grow better in messy complementary families than in tidy rows of monoculture in which everything else is considered "weeds" and exterminated.
It's great to see youngsters getting rewards for bringing this message to the public eye, countering Monsanto's advocacy for broad-spectrum herbicides that are effectively killing off the biosphere with each passing year. Nature is amazingly productive when allowed to do her thing, instead of undermined by highly destructive profit-led myopia.
One of my favorites out there today is the A10-OLinuXino-LIME.
The Beagle Bone was good in its day, but it is kind of over the hill. The processor is underpowered compared to other ARMs.
Just to be clear, the A10-OLinuXino-LIME, BeagleBone white and BeagleBone Black all contain a single Cortex-A8 core, and the TI AM3359 runs at the same 1GHz speed in the BBB as the Allwinner A10 does in the LIME.
The original BeagleBone (white) ran its AM3359 at 720MHz, so its CPU performance is a bit less, but the BeagleBone Black (BBB) superseded it a year ago at a much lower price. As a result, the reasonable current-day comparison is between the A10-OLinuXino-LIME and the BBB, and on CPU power their similar-speed Cortex-A8 cores make them pretty much identical.
I have all of these boards and many other similar ones, and my assessment is that BBB is much more capable for embedded projects because of its additional dual realtime 200MHz PRU cores (which are quite unrivalled), while the A10-OLinuXino-LIME is more suitable as an extremely low end desktop-style "computer" because of its dual USB2 host sockets and rather more capable MALI-400 GPU.
This assessment doesn't change when the just-released A20-OLinuXino-LIME is brought into the comparison, except that the dual Cortex-A7 cores in the A20 make it a far better general purpose "computer" than its A10 sibling for a mere 3 euro more in price.
I have a theory that it's impossible to prove anything, but I can't prove it.