Whitman lost to Jerry Brown, BTW, thus earning Brown the singular distinction of having to clean up the mess left by a B-grade movie actor twice.
Very very occasionally, if the description sounds interesting, I'll paste the description/requirements into Google. Most of these spamming third-party recruiters just copy-paste from public job postings, so Google can usually find the original posting on the employer's Web site.
They've clarified this many times.
No, they haven't. All the "clarifications" I can find are simply regurgitations of the same ambiguous phrasing.
When you realize that Microsoft has been openly discussing a subscription-based version of Windows, the phrase "Free for the first year" takes on an entirely different meaning, now doesn't it? Microsoft has not clarified this, even to discredit it.
And even if MS isn't planning on a subscription-based flavor of Windows, they've still been abundantly less than clear about exactly which version of Windows 10 you'll be receiving for free. Will it be a kind-for-kind trade (Home version for Home version, "Pro" version for "Pro" version, etc.), or will everyone get the lowest-tier SKU available, probably with Bing plastered everywhere?
It would be nice if I were wrong about this. But Microsoft's history demands that I be very suspicious of Gateses bearing gifts.
All of which makes me deeply suspicious of what this "free" version of Windows actually is. We clearly haven't been told the whole story yet.
As it happens, about three years ago I started doing an irregular series of Let's Play/Drown Out videos on YouTube with my colleague, GammaDev. Both of us are former employees of 3DO, and we covered The Deal that Never Happened in a video about two years ago (seek to 25:12).
Frankly, I'm having a hard time seeing how Lenovo recovers from this.
- Expand systemd to the point where large swaths of everything depend on it, so that he is controlling as much of the code base as possible.
- Insult Linus Torvalds for a while to try to undermine his authority.
- Fork Linux, or demand that Linus give control of Linux over to him, or he will rage-quit and take his code with him.
I don't see it unfolding that way. Remember what happened when BitKeeper tried to get up in his business. Linus, if provoked, could write an init/system management framework in a couple weeks (and probably name it "twerp" or some such). And I suspect he would do so long before things got to stage #3, just to prove the point.
C'mon, guys, this is copy-pasted marketing fluff. Better is expected of you.
He's implying that developers will specify a complete environment where every DLL available to the application within the environment is exactly what the developer used. There is no DLL hell because you run what the developer ran, and it doesn't matter if you have seventeen different incompatible versions of (to pick a Windows example everyone's familiar with) mfc42.dll, because things inside the container won't know that you have those DLLs.
In that case, why bother with dynamic linking at all? Why not statically link everything? The effect is essentially the same -- you get exactly what the developer had. You also get no shared code pages -- even if you're using exactly the same library as someone else -- and bloated memory and disk usage since you have your own private copy of everything. Disk may be "cheap," but it's still surprisingly easy to fill up a 16GB eMMC device.
"You can update transactionally!!" Great. What does that mean? Is it like git add newapp; git commit -a? If so, how do I back out a program I installed three installations ago?
Transactional updates have lots of useful properties: if they are done well, you can know EXACTLY what's running on a particular system.

You can roll updates back.
dpkg -i <previous_version>
...lets you choose exactly the capabilities you want for yourself, rather than having someone else force you to use a particular tool.
Because there is a single repository of frameworks and packages, and each of them has a digital fingerprint that cannot be faked, two people on opposite ends of the world can compare their systems and know that they are running exactly the same versions of the system and apps.
Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps.
...Did this guy just say he brought DLL Hell to Linux? Help me to understand how he didn't just say that.
I bet the average system on the cloud ends up with about three packages installed, total! Try this sort of output:
$ snappy info
frameworks: docker, panamax
That's much easier to manage and reason about at scale.
No, it isn't!! What the hell is OwnCloud pulling in? What's it using as an HTTP server? As an SSL/TLS stack? Is it the one with the Heartbleed bug, the POODLE bug, or some new bug kluged in by the app vendor to add some pet feature that was rejected from upstream because it was plainly stupid?
Honestly, I'm really not getting this. It just sounds like they created a pile of tools that lets "cloud" administrators be supremely lazy. What am I missing here?
I worked for NTG/3DO for just under five years, so I know (knew) the machine inside and out. It will be interesting to go through this code and see what kind of tradeoffs were made.
Some comments on the README:
My friends at 3DO were begging for DOOM to be on their platform and with christmas 1995 coming soon (I took this job in August of 1995, with a mid October golden master date), I literally lived in my office, only taking breaks to take a nap and got this port completed.
*snerk* I could have told you at the time that a ten-week dev cycle was crazy talk.
3DO's operating system was designed around running an app and purging, there was numerous bugs caused by memory leaks. So when I wanted to load the Logicware and id software logos on startup, the 3DO leaked the memory so to solve that, I created two apps, one to draw the 3do logo and the other to show the logicware logo. After they executed, they were purged from memory and the main game could run without loss of memory.
An interesting and valid approach (3DO's OS had full memory tracking). I'd be interested to know which of the 3DO libs was leaking memory on you.
The verticle walls were drawn with strips using the cell engine. However, the cell engine can't handle 3D perspective so the floors and ceilings were drawn with software rendering. I simply ran out of time to translate the code to use the cell engine because the implementation I had caused texture tearing.
Were the floor/ceiling textures not power-of-two dimensions on each side? As I recall, you only got texture cracking when the dimensions were not power-of-two.
You could have decomposed the floor/ceiling textures into strips as well, but ultimately the lack of perspective correction meant you were going to have to do some heavy lifting somewhere.
I had to write my own string.h ANSI C library because the one 3DO supplied with their compiler had bugs! string.h??? How can you screw that up!?!?! They did! I spent a day writing all of the functions I needed in ARM 6 assembly.
Ah, yes, the Norcroft compiler (or, as I always called it, Norcruft). It was a piece of shit. It was also the only thing available that would run on the Mac. It was never anything but a C compiler, but kept throwing unblockable warnings about constructs that C++ would have problems with (such as implicit cast from void*). There was no MacOS port of GCC, and there were no usable ARM backends for GCC available at the time, anyway. (Bear in mind, this was before the Web existed in any familiar form, and you had to go trawling through USENET for clues -- not even AltaVista existed yet).
I hope that everyone who looks at this code, learns something from it, and I'd be happy to answer questions about the hell I went through to make this game. I only wished I had more time to actually polish this back in 1995 so instead of being the worst port of DOOM, it would have been the best one.
I'm sure many memories will come flooding back.