Times have changed - when I did my computer science degree, most of the students were at the geeky end of the spectrum and were there because that's what they were really into. Compare that to the present-day cross-section of computer science students: most of them are there because computing is seen as a good career. The extracurricular interest is giving way to people who just want a job.
I disagree. People like you and me merely congregated together and ignored the others. (Also, you went to school in Wales. Different world.) My above statement was about "my most IT-savvy freshman colleagues," which is to say under a dozen total (and I was friends with all of them). I'd say about 75% of my freshman peers in CS declared the major for its salaries and/or a passion for video games. I imagine today's breakdown is roughly the same, more due to the fact that most freshmen are blank slates than any measure of incoming freshman tech savviness (which brings us back on topic...).
I even chose CS over History and other things I was roughly equally interested in because it mapped to a better career. (Also because my grades were stronger in CS and advanced math, but that had more to do with my odds of acceptance.) I had lots of classmates who were horrible at math but had chosen the program for the money it represented. Most of them failed out and migrated to the business program (which was less academically rigorous at that school at that time; these days, they'd fail there too).
When I went off to college, many of my most IT-savvy freshman colleagues were versed in networks and system administration because they had run the computer labs of their high schools. Some of them had been caught cracking or otherwise mucking about in ways that the school staff lacked the ability to revert, and had been forced to clean up after themselves; others saw messes and volunteered to help out. They got paid and had responsibilities. From this new perspective, they learned the "damage" students could deal and then had the hands-on task of cleaning it up. I wish I had had that opportunity.
In this sort of environment, especially given the ubiquity of virtual machines and virtual networks, a well-run capture the flag (CTF) event should be easy enough to organize. Even without virtualization (or any lab at all), any school could reach out to a local hacker group and ask them to host a CTF event. The cost of scrounging up a bunch of computers and networking equipment for a one-shot event should be fairly low given the spare parts in your typical hacker group or Linux users group. Maybe the school or city could even provide a budget for the event.
"The island is 1,250 km (780 mi) long and 191 km (119 mi) across its widest points and 31 km (19 mi) across its narrowest points.[1] The largest island outside the main island is the Isla de la Juventud (Isle of Youth) in the southwest, with an area of 2,200 km2 (850 sq mi)." http://en.wikipedia.org/wiki/Geography_of_Cuba
Sorry, Slashdot killed my squared symbol and I missed it in the content preview. Wikipedia says Cuba is about 110,000 km^2 in area.
If 802.22 can cover a 100 km radius (200 km diameter), width isn't an issue. The 1,250 km length would need only seven full-powered 802.22 antennae to provide a "backbone" across the main island (1250/200 = 6.25, rounded up). Maybe each of those can have either a satellite uplink or a wired connection. Surely, another few hundred cheaper and/or lower-powered antennae (perhaps 802.11y or 802.11af?) would be able to saturate valleys and high-density areas.
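A quick back-of-the-envelope check of the backbone math above (Python used purely as a calculator; all figures come from the numbers in this thread):

```python
import math

# Figures from the comment above: 1,250 km island length and a
# 100 km radius (200 km diameter) footprint per 802.22 base station.
island_length_km = 1250
coverage_diameter_km = 2 * 100

# 1250 / 200 = 6.25, so seven stations are needed to span the length.
backbone_antennae = math.ceil(island_length_km / coverage_diameter_km)
print(backbone_antennae)  # -> 7
```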
If Cuba built its own onion routing network (perhaps using Tor software, though not connected to the Tor network), then each satellite dish or other internet connection would automatically be able to facilitate connectivity for the rest of the network. No need to wire anything (except some of the exit nodes); this can all happen over wifi.
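For illustration only, here's a toy sketch of the onion idea: the sender wraps a message in one layer per relay, and each relay peels off only its own layer. The XOR "cipher" is a stand-in so the example runs without crypto libraries; a real network (like Tor) uses hybrid public-key encryption and per-hop circuits.

```python
import secrets

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption; do not use for anything serious.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    # Apply the exit relay's layer first and the entry relay's layer
    # last, so the relays can peel layers off in path order.
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

relay_keys = [secrets.token_bytes(16) for _ in range(3)]  # three-hop path
onion = wrap(b"packet for the exit node", relay_keys)
for key in relay_keys:  # each relay removes only its own layer
    onion = xor_layer(onion, key)
print(onion)  # -> b'packet for the exit node'
```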
Don't forget that 802.11af, 802.11y, and 802.22 have ranges measured in miles (802.22 can cover 100 km). Blanketing an island of 110,000 km^2 would still take a good number of antennae (especially given the dead zones created by dense buildings in cities), but at a governmental budget scale, it seems quite feasible.
Thanks for taking the time for this, Soulskill (et al).
I really miss the ability to set comment thresholds in the GET parameters of an article's URL (removed in the last major UI upgrade). I have a lot of friends who do not frequent Slashdot, and when I link them to an article whose better comments I want them to read, it needs to be at a threshold they'll tolerate (typically 5/4 for full/abbreviated, if there are enough comments).
I have other suggestions as well, but getting comments right is by far #1. I can fix the rest with Greasemonkey.
I see this as inevitable, really.
If we want autonomous vehicles to be maximally efficient, this has to happen. They could move out of the way for a police officer, or for somebody who has to change route at the last minute and get to an exit from the opposite lane. More importantly, self-driving cars can cluster together. Take India, for example: drivers there go 4-5 cars wide in lanes marked for 3. Highly efficient, but highly unsafe for human operators.
This doesn't have to invade our privacy or be implemented insecurely. If range is limited and details forgotten when they become irrelevant, then we're fine. If cars generate random IDs, there's no way to collate them together over time (well, without existing technology like reading plates or RFID).
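A sketch of what "random IDs plus forgetting" might look like. The class name and the five-minute rotation interval are my own hypothetical choices, not anything from a real vehicle-to-vehicle standard:

```python
import secrets
import time

class RotatingBeaconID:
    """Broadcast ID that is regenerated periodically, so observations
    made at different times can't be linked to the same vehicle."""

    def __init__(self, lifetime_s: float = 300.0):  # 5 min is arbitrary
        self.lifetime_s = lifetime_s
        self._rotate()

    def _rotate(self) -> None:
        self._id = secrets.token_hex(8)  # fresh 64-bit random identifier
        self._expires = time.monotonic() + self.lifetime_s

    def current(self) -> str:
        if time.monotonic() >= self._expires:
            self._rotate()  # the old ID is simply dropped, never logged
        return self._id
```

Anyone listening sees only short-lived random tokens; once a token expires, nothing ties the next one back to it (short of out-of-band tricks like reading plates).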
Simple: Render the list randomly each time it is viewed (server side, please). It should average out to being a fair representation.
Maybe pick a SEO-friendly order and statically serve it to the search engines (e.g. by identifying the Googlebot User-Agent).
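A minimal server-side sketch of both suggestions. The Googlebot substring check and the sorted "SEO-friendly order" are placeholders; adapt them to your stack:

```python
import random

SEO_ORDER = sorted  # placeholder: whatever stable ordering ranks best

def render_listing(items: list[str], user_agent: str) -> list[str]:
    # Crawlers get a stable order so indexing stays consistent;
    # human visitors get a fresh shuffle, so over many page views
    # every entry spends equal time at the top of the list.
    if "Googlebot" in user_agent:
        return SEO_ORDER(items)
    shuffled = list(items)    # copy; never shuffle the canonical list
    random.shuffle(shuffled)  # uniform in-place Fisher-Yates shuffle
    return shuffled
```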
Wine: an emulator of the win32 API+ABI on POSIX+X. WinXP/Vista/7/8: an emulator of the win32 API+ABI on NT.
Neither is native in this sense.
That reminds me of this gem from the Cygwin FAQ (through Dec 2009, since removed for political correctness):
Windows 9x: n. 32 bit extensions and a graphical shell for a 16 bit patch to an 8 bit operating system originally coded for a 4 bit microprocessor, written by a 2 bit company that can't stand 1 bit of competition.
The wayland repository continues to mature and moves slowly. This cycle again only saw a few wayland changes, most of which were fairly unexciting:
- SHM buffer SIGBUS protection. We added a couple of utility functions to help compositors guard against broken or malicious clients who could truncate the backing file for shm buffers and thus trigger SIGBUS in the compositor (Neil Roberts).
- Subsurfaces protocol moved to wayland repo and as such promoted to official wayland protocol (Pekka Paalanen).
- wl_proxy_set_queue() can take a NULL queue to reset back to default queue. (Neil Roberts).
- A few bug fixes; in particular, I'd like to highlight the fix for the race between wl_proxy_create() and wl_proxy_marshal().
- A few scanner error message improvements and documentation tweaks and polish.
I'm hoping the Maui Project (which uses Wayland) can continue to gain momentum as Wayland does and that it becomes a viable option in the next few years.
From what I can tell, GCC is still the better compiler. It is also better supported (lots of things won't build with clang or llvm-gcc). LLVM/Clang tends to compile a bit faster (which doesn't matter unless it's an order of magnitude), while the binaries GCC produces tend to run more efficiently. There's a nice benchmark comparing GCC 4.7 to Clang 3.1 (from April 2012) that demonstrates this.
I'm sure LLVM is well designed and can perhaps do better with JIT and similar concepts (which you'd have to compare to e.g. GNU Lightning), but GCC is still king. Stallman's complaint is that LLVM is getting attention and may therefore surpass GCC over time. He argues that would be bad for developers: eventually a game-changing nonfree feature could be released for LLVM, and everybody would be forced to pay for it, a fate the GPL'd GCC cannot suffer.
Happiness is twin floppies.