"This Is the Way the World Ends" by James Morrow. It's a book about the aftermath of a nuclear war.. yeah I know there are lots of those, but this one is so incredibly bleak that it makes Neville Shute's "On The Beach" look upbeat..
This is the book that came to mind as well. In the story humanity is literally put on trial for the crime of nuclear war, prosecuted by the dead and the unborn generations. It is so surreal and dark that it would best be described as a nightmare or panic attack committed to paper.
The #1 aspect of Writer that is superior to Word is its handling of floating images. It isn't rocket science, but Word seems intentionally designed to maximally piss off the user:
1) Images don't actually end up where you drop them. You move an image to where you want it on the page, then Word randomly decides to lay it out somewhere else.
2) Captions are by default separate from images, so when you move the image the caption stays behind. Worse, if editing text earlier in the document causes the image to move, the caption ends up in some other random location.
3) Images and captions in the body often end up wandering around the page and laid out overlapping the header or footer.
4) Sometimes you move an image or a caption, and it just vanishes. (It may not technically be "gone", but if you can't find it to click on it, it might as well be).
5) Anchoring to a specific page doesn't work if the image's position in the text stream isn't also on that page. Again, incredibly annoying if you are editing text earlier in the document.
6) Images are considered part of a paragraph for layout, sometimes resulting in half a page of whitespace on the previous page because Word randomly decided it can't fit both the paragraph and the image in the available space, and refuses to split the paragraph across pages.
7) If you click on an image and say "change picture" to replace the image with, say, an updated image of identical dimensions, it will forget that you had resized the image and force you to redo all the tweaking to sizing and layout you had already done.
Clearly, Word's image layout is stuck in 1995 because to actually fix it would break the ten billion Word documents already out there, but it is worth pointing out that LibreOffice has far saner and more predictable behavior in every case.
I wonder how many heart attacks have been caused by blood pressure spikes in frustration over Word's terrible, buggy, asinine layout algorithms?
The key here is "attack surface". Having relatively uninhibited access to low level graphics APIs that were not previously assumed to be public means there are probably lots of bugs with security implications. I wouldn't be surprised if graphics drivers eschew error checking in order to gain performance, but now malicious programmers can use that to crash the browser or OS. Shader compilers are also quite complex, and may present opportunities for specially crafted invalid programs to overflow buffers or otherwise screw things up. Security has always always always taken a back seat to performance in the graphics world, and it may take a while for the driver writers to come around.
Windows Phone 7 is Windows CE, not Windows 7. Possibly the Windows Phone 7 stack could run on (desktop) Windows 7, but that's up to Microsoft. I contend that Android is much more likely to scale up to the desktop than Windows Phone 7 ever will (given Microsoft's business history of protecting desktop Windows from all competitors, often including other Microsoft products).
Android is Linux underneath, so there is no reason you couldn't embed it in a conventional Linux desktop stack, which is essentially what you are suggesting Microsoft should do. People have installed the Debian-ARM userspace on Android, which presumably includes the gcc toolchain (I don't know about the java and dalvik tools, though). I also believe there is no technical barrier to having multiple apps visible at the same time, it just hasn't really made sense until recently. Look at what the Notion Ink Adam tablet can do.
I agree that on larger screens (tablets, desktops) the Android UI needs a multiwindow or multipanel mode; for certain tasks you really need to have multiple applications open side by side.
The lack of "desktop style applications" is simply due to the fact that prior to Honeycomb, Android hasn't been focused on large screens where "desktop style applications" make sense. I'm sure iOS didn't have any "desktop style applications" prior to the release of the iPad.
I think the future is bright for scaling Android up; it has already overcome some of the key obstacles faced by conventional desktop Linux (lack of pre-installed devices, lack of mass market commercial apps) so if Android goes to the desktop (with the necessary UI enhancements) then users and developers will follow.
The Android security model is fairly fine grained, certainly much more so than what we see on conventional desktop OSes, and has a pretty tall wall between apps. Note that the malware was not stealing user data from other apps; it was just a spambot, stealing only CPU cycles and bandwidth.
The main problem I have with the Android security model is that the only recourse you have for a questionable app is to not install it in the first place. I'd prefer to see the ability to selectively deny permissions, so you could specify that (for example) an app requesting a network connection be denied access. In this case, that would effectively neuter the spambot while still letting the app set wallpapers as advertised. Sure, the app might just crash, but that would provide some feedback to the user as well (and cause you to uninstall it).
Unfortunately, a lot of apps probably ask for more permissions than they actually use, because the Android documentation does a poor job of describing which SDK functions require which permissions. In my experience, this leads developers to take a scattershot approach, adding permissions semi-randomly in an attempt to debug why their app is crashing with permission errors (and of course there is little incentive to remove the unnecessary ones afterward). Also, some permissions need to be further split up; a music app that needs to know when a phone call is coming in, in order to pause playback, should only need permission for that particular event, not full access to make and receive calls. Because there isn't enough information to make an informed decision, even technical users quickly stop paying attention to the "required permissions" page in the Android Market.
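For reference, this all stems from permissions being declared statically in the app's AndroidManifest.xml and granted all-or-nothing at install time. A sketch of what a wallpaper app's manifest might look like (the permission names are real Android permissions; the app and the exact set it requests are hypothetical):

```xml
<!-- Hypothetical wallpaper app: the user must grant ALL of these
     at install time, or decline to install the app at all. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.wallpapers">
    <!-- Needed for the advertised feature: fetching wallpaper images. -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.SET_WALLPAPER" />
    <!-- Coarse-grained: requested just to detect incoming calls,
         but exposes far more of the phone state than that. -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
</manifest>
```

Nothing in this declaration tells the user *why* each permission is needed, which is exactly why the scattershot-permissions problem is invisible on the Market page.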
Quick, we need to send them 1.5 million free-trial AOL CDs!
No, if there are 3 candidates, you have three votes -- you are in effect voting "yes" or "no" for each candidate.
Granted, if you don't approve of anyone, it doesn't feel like your blank ballot is actually three votes. To explicitly register disapproval, you'd probably need a "none of the above" option, or a rule that the winner must be approved on some minimum percentage of ballots cast, so that blank ballots really do make it harder for candidates to win. But if no candidate is accepted, you have to hold another round of elections, with all the time and expense that entails, which is why you don't see this in practice.
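A toy sketch of that tally in Python (the ballots, candidates, and threshold are made up for illustration): each ballot is the set of candidates the voter approves of, a blank set counts against the minimum-support threshold, and if nobody clears the bar you'd have to re-run the election.

```python
# Toy approval-voting tally. Each ballot is a set of approved candidates;
# an empty set is a valid "approve of nobody" ballot.
def tally(ballots, candidates, min_support=0.5):
    counts = {c: sum(1 for b in ballots if c in b) for c in candidates}
    winner = max(counts, key=counts.get)
    # Require approval on a minimum fraction of ALL ballots cast,
    # so blank ballots make it harder for anyone to win.
    if counts[winner] / len(ballots) < min_support:
        return None  # nobody reached the threshold; hold a new election
    return winner

ballots = [{"A", "B"}, {"B"}, set(), {"B"}]  # note one blank ballot
print(tally(ballots, ["A", "B", "C"]))  # -> B (approved on 3 of 4 ballots)
```

The `None` case is exactly the expensive outcome the comment describes: a whole new round of voting.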
DOS/Windows gave people more control over their computers. people had the software locally and could install anything they wanted. anytime.
same with my iphone. i have all the files local on my laptop. if apple pulls an app then i can still use it. all i do is add the
with android the app install process is in the cloud and controlled by google
Nonsense. Unlike the iPhone, Android has always allowed installation of apps without going through the store. You can download them through the web browser, install them from the SD card, and there are 3rd party market apps that compete with the Google market.
1998 called. It wants its flamewar back.
One of the primary reasons manual memory management sucks is even if your code is beautiful and perfect and doesn't let a single byte slip by, you might be required to interface with a library written by a neanderthal that codes by beating his club on the keyboard and creates all sorts of incidental memory errors that don't affect the immediate functioning of the library, but have side effects for your program as a whole (memory leaks especially). Memory safe, garbage collected languages dramatically reduce these sorts of problems.
Even if it's ASCII or a picture, just encrypt it twice.
I've always wondered what would happen if you were to encrypt a file over and over again, with different keys.
You get Triple-DES.
Also, consider that encryption algorithms are not magic. Having no pattern distinguishable to an attacker is the goal, but the data is not actually random! Encryption comes down to applying a set of mathematical transformations to your data, which leaves "fingerprints" in the ciphertext if you know what to look for. Applying more than one algorithm, or the same algorithm more than once, at best adds a security-through-obscurity aspect to hinder reverse engineering; at worst it may introduce patterns that make your ciphertext easier to attack, compared to simply increasing the key size used with a single well-designed algorithm.
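A degenerate illustration of how composed encryptions can collapse, using a plain XOR stream "cipher" (emphatically not a real cipher, just the simplest case of the problem): encrypting twice with two keys is exactly equivalent to encrypting once with a third key, so the second pass buys you nothing.

```python
# Toy XOR "cipher" -- NOT secure, used only to show that some ciphers
# are closed under composition: two passes collapse into one.
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

msg = b"attack at dawn"
k1, k2 = b"\x13\x37", b"\xca\xfe"

double = xor_encrypt(xor_encrypt(msg, k1), k2)
# A single combined key reproduces the double encryption exactly:
k3 = bytes(a ^ b for a, b in zip(k1, k2))
assert double == xor_encrypt(msg, k3)
```

Real block ciphers don't necessarily collapse this way (DES is provably not a group, which is part of why Triple-DES genuinely adds strength), but the point stands: stacking passes is no substitute for one well-designed algorithm with an adequate key size.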
IANASR (I am not a security researcher)
I had higher hopes that the original article would discuss specific technical reasons for choosing one API over the other, aside from the issue of platform support.
From my perspective, the controversy boils down to a handful of actual issues:
* Quality of drivers. D3D drivers have historically been more solid than OpenGL drivers on Windows. This is less of an issue these days with Nvidia. Unfortunately ATI OpenGL drivers remain a bit flaky.
* Market. I believe that the very high end graphics workstation market (think Hollywood CGI artists, CAD, etc) is still invested heavily in Unix (Linux) based tools. Nvidia has a much bigger foothold in this market than ATI, which explains why Nvidia has superior X.org drivers and better OpenGL support all around.
* Bleeding edge technical features, if you are trying to achieve some advantage in rendering quality over your competitors. This makes sense in the graphical arms race of gaming, but most of the rest of the visual simulation industry (3D modeling, CAD, scientific computing, government/military, etc) doesn't care about the cutting edge as much.
* What your 3D engine of choice supports. With the many commercial and open source 3D engines now available, writing a whole 3D engine from scratch is going to be silly most of the time, so you are not going to be writing much bare D3D or OpenGL code anyway.
Like a lot of other areas, Microsoft's development solutions work great if you stay in the Microsoft ecosystem. As a pure business decision sometimes it makes sense.
What irks people (including me) is when Microsoft deliberately or de facto freezes out the competition; this is where we end up with frustrating situations like ATI having inferior support for OpenGL on Windows. There's no technical reason for it, just some manager's decision on how to allocate developer resources. Longtime Linux users know this is a story that has played out with many devices; usually there is no technical reason a piece of hardware can't be used on Linux, it is simply a matter of the manufacturer choosing whether or not to devote additional resources to supporting platforms other than the one with the biggest market share.
So ultimately it is about mindshare: putting pressure on Nvidia and ATI to step up to the plate with good OpenGL support, and convincing Microsoft that it is not in its best interest to screw over Windows OpenGL users.
(did I mention enough times how much ATI OpenGL driver quirks annoy me?)
America's Army licenses the Unreal engine.
The closest I know of to a general purpose engine intended for military simulation apps would be Delta 3D (http://delta3d.org), but that is more of a small-scale academic effort than a robust commercial product.
Open Scene Graph (http://openscenegraph.org) is pretty widespread in the visual simulation industry, but hasn't gotten much traction in the PC/console games sector.
The Army deals a lot with modeling real-world places based on GIS data, which creates a whole slew of toolchain and scale requirements that are not typical in the entertainment industry, where you get to make stuff up.
Also, personally I think Army sims should invest more in graphics quality to improve immersion, but management usually sees it as a waste of money when you could be madly cramming more features into the product instead. Pretty much the same issue as any other software development, actually.
From experience, working on an Android app in my spare time -
1) The SDK runs on Windows, Linux and OS X. This is a big plus, since you can do development using the desktop platform of your choice.
2) Android is Java-based, which is a relatively civilized language compared to C/C++/Objective-C (the relatively safe memory model of Java avoids whole classes of bugs based on memory mismanagement, buffer overflows or wild pointers).
3) Eclipse is a pretty powerful development environment. Having not used it prior to Android development, I'm pretty impressed at its ability to detect and offer to fix syntax errors automatically.
4) Running and debugging your app in the emulator Just Works.
5) Access to existing Java class libraries and ability to share code with desktop apps (with some reservations, as android does not support the entire java.* standard library)
6) Multiple ways to install your app on an actual device without going through the Market (can download the
Overall, I would say the development experience is pretty close to normal desktop app development. There isn't a big feeling of "going without" that I would have expected from embedded development - the one exception being filesystem storage, as users cannot be expected to download and install hundreds of megabytes of data required by your app as might be the case on the desktop.