
Comment Re:problems, lol (Score 1) 221

That's fine, if the goal for the language is to wither. Ten years ago, I'd have recommended learning C and giving C++ a wide berth, and I started new projects in C. Now I'd recommend avoiding C for anything where there is another option. If a project is already written in C, I'd consider using C++ for new code and gradually migrating the rest.

If the goal is to provide a good portable systems programming language then C is no longer succeeding.

Comment Re:Java? (Score 2) 346

For garbage collectors, I'll agree (as long as, by Java, you mean MMTk on Jikes RVM and not OpenJDK). For JITs... no. CoreCLR is a lot nicer. It supports nested JITs with fallback, so you can add a new JIT easily and have it bail to another one when it can't handle a particular construct. This makes incremental development and research prototypes that focus on a specific area both a lot easier than anything I've seen in a JVM. Modifying the Jikes RVM JIT is horrible (actually, the Jikes RVM code in general is fragile and flakey - MMTk isn't actually good, it's just that it doesn't really have any less-buggy competition).

Comment Re:Java? (Score 1) 346

Exactly what I was going to say. Java is good at cross-platform GUIs if your idea of a good cross-platform GUI is one that looks and feels the same on all platforms. A good GUI, however, is one that integrates with the host platform and matches all of the platform's human interface guidelines. Java GUIs don't do this. AWT aimed to, but it was deprecated in favour of Swing. Swing implements everything in Java, with pluggable looks and feels, but the looks and feels never quite match the platform. SWT thinly wraps the host windowing toolkit and works fine as long as your host system is win32, otherwise it has a bunch of impedance mismatches and ends up leaking CPU.

Comment Re:Other IM services (Score 1) 185

There are a couple of possible explanations:

As others have pointed out, Twitter does engage in censorship, which might make it ineligible for safe harbour provisions (which require that you do not actively take a role in the content of the communication that you host).

They are a company that doesn't have as much experience in litigation. Yahoo!, Microsoft, and AOL have all been involved in enough lawsuits that they keep a warehouse full of lawyers to airdrop on anyone with a stupid-looking lawsuit. Twitter is big enough to have enough money to be a good target, but not experienced enough to necessarily be a particularly tough opponent in court.

Comment Re:Next up O'Google (Score 1) 205

The problem is that they don't bring many jobs, and the ones that they do bring are low-skill and low-pay. For example, Apple runs a call centre and a distribution centre in Ireland. The call-centre employees are just reading from a script, and the distribution centre is moving boxes around. They're not bringing the engineering and R&D jobs that come with high salaries, the ones that translate into higher income tax revenues and knock-on benefits in the local economy from increased spending.

Comment Re:Moronic Subject for an Article (Score 2) 221

Java isn't a bad language. It's a constrained language, but in general it's constrained in a good way. It may make it difficult to write the best solution, but it makes it impossible to write the ten worst solutions and easy to write a not-too-bad solution to any given problem. It also strongly encourages modularity and provides tools for reducing the privilege of parts of a program, so that you don't need to place equal trust in every programmer whose code runs in your address space. It's certainly not the best tool for all jobs, but if you have a complex business application that you want to support for a long time with relatively high programmer turnover, it's far from the worst tool.

Comment Re:It's not a popularity contest (Score 1) 221

That's a good reason for providing a C interface, but there's no reason not to use C++ (or Objective-C) inside your library. That said, if you provide a C++ interface that uses smart pointers and conveys explicit ownership semantics, then it's much easier to machine generate interfaces for other languages (even for C) that care about memory management.
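Roughly, the shape is something like this (everything here is a made-up example): keep ownership explicit with unique_ptr inside the library, and export opaque handles through a plain C interface with an explicit destroy function:

// Hypothetical sketch: a C API over a C++ implementation that keeps
// ownership explicit with std::unique_ptr internally.
#include <memory>
#include <string>

namespace impl {
class Widget {
public:
    explicit Widget(std::string name) : name_(std::move(name)) {}
    const std::string &name() const { return name_; }
private:
    std::string name_;
};
} // namespace impl

extern "C" {
// Opaque handle for C callers; ownership transfers to the caller, who
// must release it with widget_destroy().
typedef struct widget widget;

widget *widget_create(const char *name) {
    auto w = std::make_unique<impl::Widget>(name);
    return reinterpret_cast<widget *>(w.release());
}

const char *widget_name(const widget *w) {
    return reinterpret_cast<const impl::Widget *>(w)->name().c_str();
}

void widget_destroy(widget *w) {
    // Re-adopt ownership so the destructor runs exactly once.
    std::unique_ptr<impl::Widget> owner(reinterpret_cast<impl::Widget *>(w));
}
} // extern "C"

Because the ownership rules are stated in the C++ types rather than in documentation, a binding generator has something mechanical to work from.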

Comment Re:problems, lol (Score 4, Informative) 221

The real problem with C is that WG14 sat on its hands between 1999 and 2011. C11 gave us:

_Generic - Useful for a few things (mostly tgmath.h, which I've rarely seen used in real code because C's type promotion rules make it very dangerous, but it was quite embarrassing that, for 12 years, the C standard mandated a header that could not be implemented in standard C). Existing compilers have all provided a mechanism for doing the same thing (they had to, or they couldn't implement tgmath.h), but it was rarely used in real code. Oh, and the lack of type promotion in _Generic makes it annoyingly verbose: int won't be silently cast to const int, for example, so if you want to handle both then you need to provide int and const int cases, even though it's always safe to use const int where an int is given as the argument.

_Static_assert - useful, but most people had already implemented a similar macro along the lines of:

#define CAT_(a, b) a ## b
#define CAT(a, b) CAT_(a, b)
#define STATIC_ASSERT(x) static int CAT(_assert_failed_, __COUNTER__)[(x) ? 1 : -1]

This gives a one-element or minus-one-element array, depending on whether x is true. If x is true, the array is optimised away; if x is false, you get a compile-time failure. _Static_assert in the compiler gives better error diagnostics, but doesn't actually increase the power of the language.

And then we get on to the big contributions: threads and atomics. The threading APIs were bogged down in politics. Microsoft wanted a thin wrapper over what win32 provided; everyone else wanted a thin wrapper over what pthreads provided. Instead, we got an API based on the library of a small company that no one had ever heard of, and it's a clusterfuck of bad design. For example, the timeouts assume that the real-time clock is monotonic. Other threading libraries fixed this in the '90s and provide timeouts expressed relative to a monotonic clock.

The atomics were lifted from a draft version of the C++11 spec (and, amusingly, meant that C11 had to issue errata for things that were fixed in the final version of C++11). They were also not very well thought through. For example, it's completely permitted in C11 to write _Atomic(struct foo) x, for any size of struct foo, but the performance characteristics will be wildly different depending on that size. It's also possible to write _Atomic(double) x, and any operation on x must save and restore the floating point environment (something that no compiler actually does, because hardly anyone fully implements the Fortran-envy parts of even C99).

In contrast, let's look at what WG21 gave us in the same time:

Lambdas. C with the blocks extension (from Apple, supported by clang on all platforms that clang supports now) actually gives us more powerful closures, and even that part of blocks that doesn't require a runtime library (purely downward funargs) would have been a useful addition to C. Closures are really just a little bit of syntactic sugar on a struct with a function pointer as a field, if you ignore the memory management issues (which C++ did, requiring you to use smart pointers if you want them to persist longer than the function in which they're created). C++14 made them even nicer, by allowing auto as a parameter type, so you can use a generic lambda called from within the function to replace small copied and pasted fragments.
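A quick sketch (made-up example) of what that buys you: a C++11 lambda capturing a local, plus a C++14 generic lambda used as a small local helper:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> xs{3, 1, 4, 1, 5, 9};

    // C++11: a closure is roughly a struct with a call operator; the
    // captured 'threshold' becomes a field of that struct.
    int threshold = 4;
    auto big = std::count_if(xs.begin(), xs.end(),
                             [threshold](int x) { return x > threshold; });

    // C++14: 'auto' parameters make the lambda generic, so one local
    // helper can replace small copied-and-pasted fragments for
    // different types.
    auto print = [](const auto &label, const auto &value) {
        std::cout << label << ": " << value << '\n';
    };
    print("elements above threshold", big);
    print("first element", xs.front());
}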

Atomics, which were provided by the library and not the language in C++11. Efficient implementations use compiler builtins, but it's entirely possible to implement them with inline assembly (or out-of-line assembly) and they can be implemented entirely in terms of a one-bit lock primitive if required for microcontroller applications, all within the library. They scale down to small targets a lot better than the C versions (which require invasive changes to the compiler if you want to do anything different to conventional implementations).
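Something like this (illustrative, not from any particular codebase), including a trivial spinlock built from atomic_flag as the one-bit primitive:

#include <atomic>
#include <thread>
#include <vector>

std::atomic<long> counter{0};

// The 'one-bit lock primitive' idea: a spinlock built on
// std::atomic_flag, the one type guaranteed to be lock-free.
std::atomic_flag lock_bit = ATOMIC_FLAG_INIT;

void add_many() {
    for (int i = 0; i < 100000; ++i) {
        counter.fetch_add(1, std::memory_order_relaxed);
    }
    while (lock_bit.test_and_set(std::memory_order_acquire)) { /* spin */ }
    // ... critical section protected by the flag ...
    lock_bit.clear(std::memory_order_release);
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(add_many);
    for (auto &t : threads) t.join();
}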

Threads: Unlike the C11 mess, C++11 threads provide useful high-level abstractions. Threads that can be started from a closure (with the thread library being responsible for copying arguments to the heap, so you don't have the dance of passing a pointer to your own stack and then waiting for the callee to tell you that it's copied them to its stack). Futures and promises. Locks that are tied to scopes, so that you don't accidentally forget to unlock (even if you use exceptions).
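A minimal sketch (made-up example) of those pieces: a thread started from a closure with its arguments copied by the library, a scope-tied lock, and a future:

#include <future>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex io_mutex;

int main() {
    // The thread library copies the argument for the new thread, so there
    // is no handshake over pointers into the parent's stack.
    std::thread worker([](std::string msg) {
        std::lock_guard<std::mutex> guard(io_mutex); // released on scope exit
        std::cout << msg << '\n';
    }, std::string("hello from a thread"));

    // A future gives you the result of asynchronous work when you ask for it.
    std::future<int> answer = std::async(std::launch::async, [] { return 6 * 7; });

    worker.join();
    std::cout << "answer: " << answer.get() << '\n';
}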

Smart pointers. C++11 has unique_ptr and shared_ptr, for exclusive and shared ownership semantics. unique_ptr has zero run-time overhead (it compiles away entirely), but enforces unique ownership and turns a whole bunch of difficult-to-debug use-after-free bugs into simple null-pointer-dereferences. shared_ptr is thread safe (ownership in the presence of multithreading is very hard!) and also allows weak references.
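For example (made-up types):

#include <iostream>
#include <memory>

struct Node { int value; };

int main() {
    // Exclusive ownership: unique_ptr compiles down to a plain pointer
    // plus a delete, and ownership moves are explicit.
    std::unique_ptr<Node> only = std::make_unique<Node>(Node{42});
    std::unique_ptr<Node> moved = std::move(only);   // 'only' is now null
    if (!only) std::cout << "ownership transferred\n";

    // Shared ownership with thread-safe reference counting, plus a weak
    // reference that does not keep the object alive.
    std::shared_ptr<Node> shared = std::make_shared<Node>(Node{7});
    std::weak_ptr<Node> weak = shared;
    shared.reset();                                   // last owner goes away
    if (weak.expired()) std::cout << "object already destroyed\n";
}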

C++14 and C++17 both made things even better. I've already mentioned generic lambdas in C++14; C++17 adds structured bindings (so you can return a structure from a function and, in the caller, decompose it into multiple separate return values). It also adds optional (maybe types), any (a generic value type), and variant (a type-safe union) to the standard library. Variant is going to make a lot of people happy.
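Roughly (made-up example):

#include <iostream>
#include <optional>
#include <string>
#include <variant>

// Structured bindings: return one struct, unpack it into named values.
struct ParseResult { int value; bool exact; };
ParseResult parse(const std::string &s) { return {std::stoi(s), true}; }

// optional as a maybe type.
std::optional<int> maybe_port(bool configured) {
    if (configured) return 8080;
    return std::nullopt;
}

int main() {
    auto [value, exact] = parse("42");
    std::cout << value << (exact ? " (exact)" : "") << '\n';

    if (auto port = maybe_port(true)) std::cout << "port " << *port << '\n';

    // variant as a type-safe union; std::visit dispatches on the type
    // currently held.
    std::variant<int, std::string> setting = std::string("verbose");
    std::visit([](const auto &v) { std::cout << v << '\n'; }, setting);
}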

With C++11, the language moved from being one I hated and avoided where possible, to my go-to language for new projects. With a rich regular expression library, threads, smart pointers, and lambdas, it's now useable for things that I'd traditionally use a scripting language for as well (and an order of magnitude faster when crunching a load of data). In contrast, C has barely changed since the '80s. It still has no way of doing safe and efficient generic data structures (you either use macros and lose type safety, or you use void* and lose type safety and performance). It still has no way of expressing ownership semantics and reasoning about memory management in multithreaded programs. The standard library still doesn't provide any useful data structures more complex than an array (not even a linked list), whereas C++ provides maps and sets (ordered and unordered), resizable and fixed-size arrays, lists, stacks, queues, and so on.
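As a sketch of that scripting-style use (made-up example), typed containers plus the regex library:

#include <iostream>
#include <map>
#include <regex>
#include <string>
#include <vector>

int main() {
    // Type-safe generic containers: no macros, no void*.
    std::map<std::string, std::vector<int>> hits;
    std::regex word_re("[A-Za-z]+");

    std::string line = "the quick brown fox jumps over the lazy dog";
    int position = 0;
    for (std::sregex_iterator it(line.begin(), line.end(), word_re), end;
         it != end; ++it) {
        hits[it->str()].push_back(position++);
    }

    for (const auto &entry : hits) {
        std::cout << entry.first << ": " << entry.second.size() << '\n';
    }
}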

C11 didn't really address parallelism and definitely didn't address reliability or security. Microsoft Research's Checked-C provides some very nice features, but they initially prototyped them in C++ where they could implement them all purely as library features.

Comment Re:Is he going for irony, here? (Score 4, Informative) 213

In terms of Linux, it's not classical security through obscurity, it's security through diversity. One of the reasons Slammer was so painful a decade ago was that most institutions had a Windows monoculture. The time between one machine being infected on your network and every machine on your network being infected was about 10 minutes (a fresh Windows install on the network was compromised before it finished running Windows Update for the first time). If you'd had a network that was 50% Windows and 50% something else, then it would only have infected half of your infrastructure and you'd have been able to pull the plug on the Windows machines and start recovery. It's possible to write cross-platform malware, but it's a lot harder (though there's some fun stuff out of one of the recent DARPA programs writing exploit code that is valid x86 and ARM code, relying on encodings that are nops in one and valid in the other, interspersed with the converse). Writing malware that can attack half a dozen combinations of OS and application software is difficult.

This is why Verisign's root DNS runs 50% Linux and 50% FreeBSD, and on those they run two or three different userland DNS servers, so an attack on a particular OS or a particular DNS server will only take out (at most) half of the machines. Even an attack on an OS combined with an independent attack on a DNS server would still leave them with about a quarter of the machines working, which would mean a bit more latency for Internet users, but leave the service functioning.

Comment Re:AV only helps if you are bad (Score 5, Interesting) 213

You got lucky. There are two problems with most antivirus software:

Most of them still use system call interposition, which makes them vulnerable to a whole raft of time-of-check-to-time-of-use errors, so the only part that actually catches things is the binary signature checking, and that requires you to install updates more frequently than malware authors release new versions. It's a losing battle.

They run some quite buggy code in high privilege. In the last year, all of the major AV vendors have had security vulnerabilities. My favourite one was Norton, which had a buffer overflow in their kernel-mode scanner. Providing crafted data to it allowed an attacker to get kernel privilege (higher than administrator privilege on Windows). You could send someone an email containing an image attachment and compromise their system as long as their mail client downloaded the image, even if they didn't open it. It's hard to argue that software that allows that makes your computer more secure.

Comment Re:Laissez Faire Capitalist Here... (Score 1) 204

Direct government control isn't required. The good capitalist solution is not that different to the socialist solution: make homeowners own the last mile (fibre from your house to the cabinet is yours, though you may jointly own some shared trunking with your neighbours). The connections from the cabinets should be owned by public interest companies, with the shares owned by the homeowners. Providing Internet connectivity to the network would be something that you'd open to tender by any companies (for-profit or non-profit) that wanted to provide it.

The situation in most of the USA is that it's been done using the worst possible mixture of laissez-faire capitalism and central planning. Vast amounts of taxpayer money have been poured into the infrastructure, yet that infrastructure is owned by a few companies that have geographical monopolies and are not owned by their customers, so they have no incentive to improve it. Oh, and regulatory capture means that it's actually illegal to fix the problem in a lot of places. You can provide an incentive in several ways:

  • Tax penalties or fines for companies that don't improve their infrastructure. Big government hammer, and very difficult to enforce usefully.
  • Try to align the ownership of the companies with their customers. Companies have to do what their shareholders want and if their shareholders want them to upgrade the network because they're getting crap service then they will.
  • Ensure that there's real competition. This is difficult because it's hard to provide any useful differentiation between providers of a big dumb pipe and the cost for new entrants into the market is very high.

Comment Re:BS (Score 1) 175

Android and iOS have very different philosophies. Android devices aim to be general-purpose computers; iOS devices aim to be extensions to a general-purpose computer. I have an Android tablet and an iPad, and I find I get a lot more use from the iPad because it doesn't try to replace my computer. There's a bunch of stuff that I can do on the Android tablet that I can't do on the iPad, but all of it is stuff that I'd be better off doing on my laptop anyway (with the one exception of an IRC client that doesn't disconnect when I switch to a different window). I still use Android for my phone, because OSMAnd~ (offline maps, offline routing, open source, and good map data) is the killer app for a smartphone for me, and the iOS port is far less good.

Comment Re: The anti-science sure is odd. (Score 1) 695

Alas, it's a shame that it doesn't mean anything. The point here is that the Earth has undergone many shifts in its climate, sometimes in a startlingly short period of time

Except that the difference in temperature between the peak of the Medieval Warm Period and the bottom of the Little Ice Age was significantly smaller than the difference between the current temperature and the bottom of the Little Ice Age. The last time we saw an increase in temperature equivalent to that of the last 200 years, it happened over a period of tens of thousands of years.

Go and read a news story about an area of science that you know about and compare it to what the original research actually claimed. Now realise that press reports about climate change are no more accurate than that and go and read some of the papers. The models have been consistently refined for the last century, but the predictions are refinements (typically about specific local conditions and timescales), not complete reversals. Each year, there are more measurements that provide more evidence to support the core parts of the models.

Oh, and I don't think the words objectivist or dualistic mean what you think they mean. You can't discard evidence simply by throwing random words into a discussion.

Comment Re:Standard protocol (Score 2) 103

Considering that the entire selling point behind Signal is that it's supposed to be resistant to "an adversary like the NSA," I would think their ability to trivially associate a key with a real person would kind of turn that on its head.

Any global passive adversary can do traffic analysis on any communication network. Signal's message encryption should stand up against the NSA unless there are any vulnerabilities in the implementation that the NSA has found and not told anyone about or unless they have some magical decryption power that we don't know about (unlikely). Protection of metadata is much harder. If you connect to the Signal server and they can watch your network traffic and that of other Signal users, then they can infer who you are talking to. If they can send men with lawyers, guns, or money around to OWS then they can coerce them into recording when your client connects and from what IP, even without this.

In contrast, Tox uses a DHT, which makes some kinds of interception easier and others harder. There's no central repository mapping between Tox IDs and other identifiable information, but when you push anything to the DHT that's signed with your public key then it identifies your endpoint so a global passive adversary can use this to track you (Tox over Tor, in theory, protects you against this, but in practice there are so few people doing this that it's probably trivial to track).

No system is completely secure, but my personal threat model doesn't include the NSA taking an active interest in me; if they did, there are probably a few hundred bugs in the operating systems and other programs that I use that they could exploit to compromise the endpoint, without bothering to attack the protocol. I'd like to be relatively secure against bulk data collection, though: I don't want any intelligence or law enforcement agency to be able to intercept communications unless at least one participant is actively under suspicion, because if you allow that you end up with something like Hoover's FBI or the Stasi.
