Nesting virtualization containers can be useful for testing VMs on an OS you don't or can't run natively.
FYI, Java 8 includes a new date library (java.time), so that problem is fixed.
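For anyone who hasn't looked at it yet, here's a quick sketch of what the new library (java.time, from JSR-310) looks like, with the key difference from `java.util.Date`/`Calendar` being immutable value types:

```java
import java.time.LocalDate;
import java.time.Period;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DateDemo {
    public static void main(String[] args) {
        // Immutable value types: plusDays() returns a new instance,
        // it never mutates the original.
        LocalDate release = LocalDate.of(2014, 3, 18); // Java 8 GA date
        LocalDate later = release.plusDays(10);
        System.out.println(later); // 2014-03-28

        // Period models calendar-based differences between dates.
        Period p = Period.between(release, later);
        System.out.println(p.getDays()); // 10

        // Time zones are explicit arguments, not ambient global state.
        ZonedDateTime now = ZonedDateTime.now(ZoneId.of("UTC"));
        System.out.println(now.getZone()); // UTC
    }
}
```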
As for Eclipse, I've always found its windowing toolkit (SWT) less than satisfactory and prone to crashes. I don't know whether it's even an option for you, but the latest NetBeans runs much better on FreeBSD in my experience; maybe give it a try. At the very least, NetBeans is a lot more functional out of the box than Eclipse.
Now if only JavaFX 2+ would work on FreeBSD I would be a happy man.
With FreeBSD you've got to be a bit more picky about the hardware. I can highly recommend an nVidia video card: it gets you full OpenGL acceleration (for Minecraft) and H.264/VC-1 acceleration in mplayer via libvdpau (make sure to build the port manually, as that option is not selected by default). Flash is more finicky, since it runs through the Linux emulation layer. Fortunately the web is moving to HTML5 video, which is well supported by Firefox/Chrome on FreeBSD, so YouTube works fine without Flash.
I've been using FreeBSD on my media box since about version 6, and on various servers professionally as well as at home.
Also, ZFS fucking rules.
Wow, a little condescending, aren't we? Of course I know what an "alpha layer" is; that is why I didn't mention it in my post. I merely showed that, with sufficient effort, good-quality GIFs can be made, reducing the need for yet another "not quite video" standard. Speaking of standards: APNG was rejected by the PNG group and is only supported by Firefox. So animated GIF is certainly better supported and "more standard" than the proposed alternative.
It's worth noting that a GIF may overlay multiple image blocks, each with its own color palette, producing true-color images.
The problem here is that some browsers (Chrome) insert an artificial 0.1s delay between "frames".
Also, if you can do this with GIF, one has to wonder whether APNG has any viability beyond being a source format.
Huh? Did you read the same article I did? As far as I can tell, the article is about a TCP congestion-control algorithm, which runs on both endpoints of the connection and has nothing to do with QoS on intermediate routers. The algorithm generates a set of rules from three parameters, resulting in actions such as adjusting the advertised receive window and TX rate control. The result is vastly improved total network throughput (and lower latency) without changing the network itself.
I fail to see the relevance of predictive/adaptive caching. It isn't even mentioned in the article.
Some old games (Duck Tales comes to mind) just didn't care about the time and always ran their physics/render loop as fast as possible with a fixed time step. Later games (classic Unreal, for instance) started using the RDTSC instruction introduced with the Pentium, which subsequently led to problems on SMP systems and/or CPUs with power-saving features, because the tick rate was no longer constant. I suspect MechWarrior 3 also uses it. Microsoft "fixed" that by introducing the QueryPerformanceCounter() API. Funnily enough, because a cheap high-frequency time counter is also very valuable to OS developers, RDTSC now no longer counts actual clock cycles but runs at a constant rate, so old games work again, especially if the OS takes care to keep the TSC roughly synchronized across CPUs.
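On the Java side, the rough equivalent of this whole mess is that you get one monotonic interval counter (`System.nanoTime()`, typically backed by QueryPerformanceCounter or the invariant TSC underneath) and one wall clock; a small sketch of why you want the former for timing:

```java
public class TimerDemo {
    public static void main(String[] args) throws InterruptedException {
        // nanoTime() is a monotonic counter suitable for measuring intervals.
        // It has no defined relation to wall-clock time and is only
        // meaningful when compared against another nanoTime() reading.
        long start = System.nanoTime();
        Thread.sleep(50);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("slept ~" + elapsedMs + " ms");

        // currentTimeMillis() is wall-clock time: fine for timestamps,
        // wrong for frame timing, because NTP or the user can shift it
        // backwards or forwards at any moment.
        System.out.println(System.currentTimeMillis() > 0);
    }
}
```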
It seems timekeeping is still very much in flux, with several timers to choose from: the 8253 timer, the ACPI timer, per-CPU LAPIC timers, HPET timers and finally the TSC on each core, each with its own drawbacks.
> UpdatePhysics( elapsedTime );
IMHO the biggest problem with that approach is animation jitter: you can't determine in advance how much time you will spend rendering the next frame, or when the frame will actually be displayed (unless V-sync is off, which makes it slightly easier). I don't think replays or synchronization between multiple computers are compelling arguments for fixed-rate physics, because running your physics simulation in lockstep with the server is basically impossible (and unnecessary, IMHO); for replays, all you need is a log of the important events with timestamps. I remember some multiplayer games from the past that did fixed-step simulations, and they all sucked because they limited the speed to the slowest computer. It is much better to have the server (a fast computer) run an accurate simulation and the slow client run an approximate one that is corrected frequently by updates from the server.
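For reference, the kind of variable-timestep loop the quoted `UpdatePhysics( elapsedTime )` line implies looks roughly like this (names and constants are made up for illustration):

```java
public class GameLoop {
    static double position = 0.0;
    static final double VELOCITY = 5.0; // units per second

    // Advance the simulation by however much real time actually elapsed,
    // so game speed is independent of frame rate.
    static void updatePhysics(double dtSeconds) {
        position += VELOCITY * dtSeconds;
    }

    public static void main(String[] args) {
        long previous = System.nanoTime();
        for (int frame = 0; frame < 3; frame++) {
            long now = System.nanoTime();
            double dt = (now - previous) / 1e9; // seconds since last frame
            previous = now;
            // Clamp dt so one long stall (debugger, GC pause) doesn't
            // produce a single huge, possibly explosive step.
            dt = Math.min(dt, 0.1);
            updatePhysics(dt);
            // render(position) would go here
        }
        System.out.println("position = " + position);
    }
}
```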
I'm also not entirely convinced by your examples. In my experience, exploding physics mainly happens when there is an unexpected delay between simulation steps. It looks to me like your simulation isn't damping the springs properly, or the time-step size is exceptionally jittery, but I could be wrong, of course.
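To illustrate the damping point: here's a damped spring integrated with semi-implicit Euler, which stays stable and decays even with a deliberately jittery step (all constants are made up for illustration, not taken from your simulation):

```java
public class SpringDemo {
    // One semi-implicit Euler step of a damped spring: F = -k*x - c*v.
    // Updating velocity first and then position with the *new* velocity
    // is far more stable for oscillators than naive explicit Euler.
    static double[] step(double x, double v, double k, double c, double dt) {
        v += (-k * x - c * v) * dt;
        x += v * dt;
        return new double[] { x, v };
    }

    public static void main(String[] args) {
        double x = 1.0, v = 0.0; // start stretched, at rest
        double k = 50.0, c = 2.0; // spring and damping constants
        for (int i = 0; i < 1000; i++) {
            double dt = 0.016 + (i % 3) * 0.002; // deliberately jittery step
            double[] s = step(x, v, k, c, dt);
            x = s[0];
            v = s[1];
        }
        // With the -c*v damping term the amplitude has decayed to
        // nearly zero; remove it and the oscillation never dies down.
        System.out.println("x = " + x);
    }
}
```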
It's not always about the data relationships. Cassandra for example is very easy to scale horizontally (much easier than traditional databases) and can achieve very high throughput. Last time I checked (a year ago) I could get over 50,000 stores/queries per second on a cluster of cheap commodity hardware (4 servers). That result was achieved with full redundancy (n=2). Such a setup is very resilient against failure (provided clients handle failure of individual nodes correctly). Maintaining such a cluster is also a breeze, with the ability to pull servers at will while operations continue to run. You no longer have to deal with brittle master-master/slave setups.
At the time I checked and tested about 10 different "NoSQL" solutions for viability. I had these requirements in mind:
1) Must scale horizontally, no single master dependency and must continue to work when any single node in the cluster fails.
Lots of NoSQL solutions failed this requirement because they had explicit master servers or didn't do redundant data storage.
2) Must perform at least 10,000 reads/writes of tuples per second per node on the blade servers we had available.
Again, lots of NoSQL solutions failed to perform. Some were incredibly slow, managing fewer than 1,000 queries/sec/node.
3) Must have good management tools.
Most NoSQL databases were crap in this department.
4) Must be well supported by open source (Java) libraries.
Most of them were, but a lot of them failed to cope correctly with unreachable/failed cluster nodes.
In the end Apache Cassandra was the only one which fulfilled all my requirements.
Our use cases were persistent caching (as a cache layer behind memcached), and high volume (simple) data storage.
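The reason a masterless cluster like that scales and survives node loss can be sketched with a consistent-hash ring; this is an illustration of the concept only, not Cassandra's actual code (Cassandra adds virtual nodes, tunable consistency, etc.):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class Ring {
    // Node positions on the hash ring, sorted by token.
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addNode(String name) {
        // Mask to a non-negative token. A real system uses a proper
        // hash (e.g. Murmur3) and many virtual nodes per server.
        ring.put(name.hashCode() & Integer.MAX_VALUE, name);
    }

    // A key lives on the first `replicas` distinct nodes at or after its
    // token, wrapping around. With replicas=2, losing any one node loses
    // no data, and adding a node only moves the keys in one ring segment.
    List<String> nodesFor(String key, int replicas) {
        int token = key.hashCode() & Integer.MAX_VALUE;
        List<String> owners = new ArrayList<>();
        SortedMap<Integer, String> tail = ring.tailMap(token);
        for (String n : tail.values()) {
            if (owners.size() == replicas) break;
            if (!owners.contains(n)) owners.add(n);
        }
        for (String n : ring.values()) { // wrap around the ring
            if (owners.size() == replicas) break;
            if (!owners.contains(n)) owners.add(n);
        }
        return owners;
    }

    public static void main(String[] args) {
        Ring r = new Ring();
        for (String n : new String[] { "node1", "node2", "node3", "node4" })
            r.addNode(n);
        System.out.println(r.nodesFor("some-key", 2)); // two replica owners
    }
}
```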
Yes, that is exactly the way to enjoy FreeBSD - use it for what it's good at. FreeBSD + nVidia is awesome. State of the art compilers, every port installs its development headers, knowing that _you_ are in complete control of the system instead of the other way around. Outstanding development platform. I love it!
Hell, even MySQL can get 99% availability: just re-import the entire database once a year. I am not even kidding.
Watson wants to be able to change the hardware as well as the software in his research, instead of only the software. He explains that changes to the hardware allow greater performance and/or capability for (for instance) the Capsicum framework. Keep in mind that R. Watson is a researcher, not a product developer.
The way you program the GPU has changed drastically since the advent of shaders, for both Direct3D and OpenGL. Both APIs are now essentially glorified memory managers (for texture and vertex buffers); all the actual effects and vertex transformations are programmed in shaders. Any other functionality they offer (matrix operations, for instance) can be replaced by other libraries. Annoyingly, the two APIs use very similar but incompatible shader languages (HLSL vs. GLSL). ATI vs. nVidia is still a problem, though, in terms of what you can do with shaders (and driver quality).
You will find that the most important vendor specific extensions people have been using for years are now in the core library (in particular vertex buffers).
Couldn't agree more. It is always good to speak to someone on the same level about the problems you're trying to solve, especially if they're architecture-related, with large ramifications if you get it wrong.
On the subject of splitting hairs...
1) Every ATM locks you out after a couple of tries.
2) You don't find the first digit. Each attempt finds the whole code at once with probability 1/(10000-n), where n is the number of known bad codes.
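A quick arithmetic check on that claim, assuming a uniformly chosen 4-digit PIN and no repeated guesses:

```java
public class PinOdds {
    // Probability that a single fresh guess is correct, given that n
    // codes are already known to be wrong: 1 / (10000 - n).
    static double hitProbability(int knownBad) {
        return 1.0 / (10000 - knownBad);
    }

    public static void main(String[] args) {
        System.out.println(hitProbability(0)); // 1.0E-4

        // With a three-try lockout (point 1), the total chance per
        // stolen card is 1/10000 + 1/9999 + 1/9998, i.e. about 0.0003.
        double threeTries = hitProbability(0) + hitProbability(1) + hitProbability(2);
        System.out.println(threeTries);
    }
}
```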
3) Terminator 2 is awesome.