When you're new to Go, one of the first things you learn is to search for "golang" instead of "go".
I think the long-term real-world potential is already showing in the "devops" world, with a lot of interesting new projects being written in Go:
- Consul, Packer, Otto, Terraform, …
I don't think Go will replace full web stacks; it's more suitable for backends (exposing web services), imho.
Do:
- CLI tools
- Server software, especially anything offering web services. The stdlib has a mature HTTP(S) server, and there are surprisingly mature third-party libraries for DNS servers, SSH servers/clients, and more.
Don't: GUI stuff; the libraries just aren't there.
Questionable: Web stuff. OK if you have a static frontend that talks to web services implemented in Go. Using Go strictly to serve dynamic pages is possible, but I wouldn't recommend it.
> I think Python would have a bit lower footprint in terms of disk space used and memory consumed so it might be a bit faster in a resource constrained environment where a C programmer might find themselves, but for most applications you'd write in Python I doubt there is much of a noticeable difference if you wrote it in Go.
A Python runtime plus all scripts and used modules consumes a lot more disk space than a Go application. Go might produce pretty big binaries, but on my system, the python2.7 binary alone is already 3.2 MB and python3 is 3.9 MB. Start adding modules and tools like pip that you need to deploy stuff, and it grows quickly.
Go's upside here is that the binaries produced are completely static, without any external dependencies (including libc, libpthread, etc.).
Also, Python faster than Go? Sorry, but on most systems with a default CPython install, a Go-produced binary will run circles around it, especially when it comes to concurrent programs, where the GIL is still a major bottleneck. PyPy might come close, or even surpass Go in certain benchmarks for a limited set of features, but I highly doubt that would show in real-world scenarios.
I've used both languages, and Python is getting a lot of heat from Go, which I understand. The latter is a lot easier to deploy, has a massive stdlib with stuff you care about, and while sometimes more verbose and limited, is actually a pretty nice language to code in once you get used to some annoyances. Most of my "quick & dirty" scripts are still Python, but once it becomes something more, it's usually written in Go which offers a built-in testing and benchmark framework.
The problem with simply displaying 3D models is that it is not flexible. Right now, there are working ports of Quake to WebGL; you wouldn't be able to do that if you limited yourself to providing a default engine.
The major turning point could come when the dominating mobile browsers (read: WebKit) adopt WebGL with decent performance. The original iPhone's vision was "all apps are on the web", and that now comes a bit closer. I don't believe these would actually replace native apps any time soon, certainly since both Apple and Google have now invested too much in their app stores and native development kits, but a lot of these "native" apps already are simple embedded web views wrapped in a tiny native application, and this could then easily expand to games.
The issue at hand is that the major market for 3D is games. You won't see a fully featured professional 3D modelling/rendering application as a web app any time soon, even if WebGL is successful. Games right now are dominated by Microsoft's DirectX, on which they bet big and actually won (for the time being). DirectX has dominated OpenGL for the last decade with only a few exceptions (the Quake engine being one), and Microsoft will be reluctant to adopt WebGL. Right now, however, Valve is betting on Linux and Mac, and the latter in general is gaining ground in normal households, which means more OpenGL-only machines. Yes, game devs can use Wine libraries to translate DX calls to OpenGL, but this always has its downsides. Also, all mobile development, where the majority of new games are developed, is dominated by OpenGL (ES). Point is, from now on, DirectX will only lose ground. Maybe some day, Microsoft will be forced to adopt WebGL…
Only for movies and documents. Apps have to be installed on the internal storage.
I don't get the "I want USB2 on my tablet" argument… I used to be a tablet skeptic until I actually used an iPad, and the point of a tablet, be it Android, iPad, or Windows 8, …
SD cards or some form of external memory would be useful, at least if the OSes supported it properly; I haven't seen any OS (including Android) that doesn't require a lot of tricks to ensure everything keeps working when the storage is removed (running apps from external memory that gets pulled, anyone?).
Funny thing is, Joan Daemen and Vincent Rijmen, the ones who developed Rijndael, which later became AES, both worked at the KUL…
Other version of the story: others indeed paid $4 billion and got their hands on the patents, which wasn't what Google had in mind. Apple/MS won the deal, after which Google decided to publicly accuse them of wanting to attack Android. Microsoft then responded with "we invited you to join the party", but Google didn't want to, since they wouldn't be able to use these patents in their defense against Apple or Microsoft, both major patent holders in mobile/smartphones; which sounds very logical to me. But that implies Google actually, really wanted the patents, and doesn't sound like an overbidding plan, does it?

So why did MS/Apple bid for these patents? They didn't want them used against them, and asked Google to join so they could split the bill. Sure, if Google hadn't placed such massive bids, they would have gotten them cheaper, but that doesn't sound like a good strategy when fighting two companies that both have massive cash reserves and are working together… The outcome offers no strategic advantage at all for Google. If MS or Apple had been short on cash, sure, that would have been another story, but now? The only real winner in this situation was the party actually receiving the money. If that was a strategic move, it was an absolutely moronic one against companies with a lot more experience at this level.
Some people say the talks with Motorola had only been going on since that $4 billion fiasco, but I don't really believe that. I just think it gave Motorola a clear upper hand in the acquisition negotiations. I mean, $2.5 billion if the deal fails? And the announcement that Motorola would sue other Android manufacturers? Strictly a strategic move from Motorola to push the price up, and it worked: Google will pay WAY more than the company was valued at. The stock shot up from $24 to $38 after the announcement; just check MMI on Nasdaq. That's more than +50%, and even with that massive spike, the total market cap is still only a bit more than $11 billion. But Google needs it, it needs the patents, to be able to force other Android phone manufacturers into a patent consortium, which would then defend Android's interests; otherwise it could cost Google a lot more than this $12.5 billion… Motorola knew that, and had Google by the balls.
Oh, and the mobility division is making losses year after year; it is not "a workable company to boot with". It will require major restructuring to make it profitable, and this will take time and, again, a lot of money. The outcome, I hope, is that Google manages to restructure and make this division profitable, and finally create phones that can compete with iPhones on the hardware level. And also, end this entire patent bullshit. Google ignored patents completely when it came to Android, and now they just realized what a mistake this was. They'll be in a much stronger position now, but it appears it will cost them way more than they ever expected…
Who in their right mind would pay $4 BILLION for a distraction? Don't forget that if Apple/Microsoft had decided that much money wasn't worth it, Google would have bought that "distraction" for that sum, and I'm pretty sure that would have prevented them from spending another $12.5 BILLION on Motorola. I won't even go into what shareholders might think about such a thing; right now the market clearly isn't too confident in what just happened at Google… There aren't many companies that could afford such cash spending. Sadly for Google, Apple is pretty much the only one with a cash reserve huge enough for such massive buyouts, and even Apple chooses not to do it alone.
Google has a very weak patent portfolio. They're in the same situation Microsoft once was, and they decided to hire the guy who used to be responsible for the IBM patent portfolio… Google likes the patent system just as little as Microsoft liked it (there are some very vocal quotes from Bill Gates against software patents), but now realizes they have to invest in patents anyway, just to defend themselves.
Also, don't forget, Google is not a hardware company, they have zero experience on this level. Yes they have the Chrome notebook and Nexus phones, but they were all designed and built by other companies with their approval.
No, sorry, no review sites praising Android and RIM tablets available. They must all be on Apple's payroll..
You can also interpret history like this: if you can get the developers behind you, your platform wins. That is how MS "won" the PC wars in the '90s. That, and a lack of vision by Apple back then. And guess which platform has all the developers behind it right now? Also, Android is nowhere near the #1 mobile OS. It only is if you count just "iPhone sales" and ignore the iPads and iPod touches, two devices which have proven to be massively popular. Just to demonstrate what I mean: I have 14 colleagues. There are 6 iPhones, 5 iPod touches, 4 iPads, 1 Samsung Galaxy Tab, and 6 Android phones. 7 vs. 15… And all iPod touches except for one are owned by people who have an Android phone…
And on phones, yes, Android is a serious player there. But Android phones are mostly pushed through carriers, and people know what a phone is and what its primary use is supposed to be. Google trying to sell its own Nexus One was a failure. Tablets, on the other hand, are something new, and the only way to demonstrate to a non-tech person what one is, is by showing it. It just happens that the app ecosystem is one of its primary strengths, as you should understand if your iPhone 4 is filled with apps. And that, at this moment, can only be demonstrated on the iPad, and it's going to be a tough job for Android to enter this market with nobody pushing the devices. It's the chicken-and-egg problem: nobody is buying the tablets because there are no apps, and nobody is making the apps because there is no existing market. Unless someone pays for the development of a few key killer apps for Android, the platform is going nowhere in the tablet market.
That said, I really hope Android tablets improve and prove to be serious contenders for the iPad, just to kick Apple in the nuts now and then, since iOS can still be improved a lot (notifications, anyone?).
It doesn't do anything "new" - it just does it differently. It's about comfort. You pick it up, you don't have to wait for anything, do your thing, and put it down again. And doing that stuff anywhere you want. If you have a 3G-capable version (imho silly to buy a non-3G version), no fiddling with wireless sticks, crappy drivers and custom, carrier-specific login-programs.
I use my recently bought iPad for lazily reading my RSS feeds on my couch using Reeder (which links to my Google Reader account), streaming a movie from my PC using Plex, reading PDF tech specs at work that I need for doing actual work on my laptop, checking my email and calendar, browsing, playing games, checking the weather and news, GPS in my car, instant messaging, checking out my social network stuff and random news with the awesome Flipboard app, quickly checking a server over SSH on the road, and so on.
And if something else comes up, I just put it down; it's not even necessary to close the cover or tap the lock button, and I put it away with the comfort of a book. No clumsy closing of the lid, balancing my laptop, checking whether my laptop is running out of battery or whether I'd manage with the current charge, and so on.
First of all, the tech aspect is not what the average Joe is looking at. Faster CPU? Better cameras? Higher resolution? They wouldn't know what the hell you are talking about. "Oh look, it can play that cool game or do the cool stuff my friend can do with his phone!" This is a real-life situation I've witnessed: three college girls talking about apps on the Apple App Store, one holding an iPad, one an iPhone, and a third looking very interested. Ten minutes later they're browsing through some fashion-brand app and talking about shoes & stuff. You really think that third girl would EVER consider buying a tablet that can't do that? And if she got one, she would think it sucks.
That is your average user. They do not care about specs. They care about the cool stuff they can do with it. And you know what? They're absolutely right. Better camera? As if a tablet would be a replacement for a "real" camera, or even a phone camera? SD card reader? What for? Storage expansion? Storing camera photos on the tablet? Most people just attach their camera to their PC using USB and use the standard crapware that came with the camera to store them on their PC. Faster CPU and more memory? They won't even know what you're talking about. USB? What are you going to connect to it? A keyboard? A camera? A printer? Seriously? The point of a tablet is portability. Wireless is the key.
Flash? That must be the most pointless argument ever. Users don't even know what the hell it is. And if what I've seen of Flash on Android phones reflects the user experience on tablets, most users would agree it sucks. Also, most videos on the web (the primary use for Flash) nowadays play perfectly on the iPad. If your site's video is not working while it works just fine on other sites, your site is broken, not the tablet. I never understood how a closed technology with such a bad security track record as Flash suddenly became an argument "pro" an "open" platform. Although the interpretation of Android's openness seems to be subject to Google's will.
Oh, and as someone who ran his own software development company: I wouldn't risk investing loads of my own money in a platform which could very well fail miserably. The iPad's market, however, is something easily accessible and visible to everybody. The market is there, people know it, and it "just works".
Sorry, but that's absolutely wrong. The reason why movies and TV work at 24 fps and games don't is simple: motion blur. A camera captures all motion within a certain timeframe (equal to the shutter time of the camera). A computer renders a single instantaneous snapshot of that motion, which appears very sharp. For the former, your brain creates sharper images than there actually are, even at 24 fps, and perceives them as motion.
For the computer-generated ultra-sharp snapshots, however, you need a lot more images per second to convince your brain that it is seeing fluid motion. There is no way this is affected by screen size, focus, or your eyes. It's your brain that has to be fooled, not the eye. And for those sharp, unblurred images, you need at least 60 fps to fool your brain in every situation, and Carmack knows this very well…
Maybe Computer Science should be in the College of Theology. -- R. S. Barton