How about making products your customers actually want? Like a MacBook Pro that's actually a pro-level computer. Or a "Cheesegrater" Mac Pro with Thunderbolt and USB 3/3.1?
See, here's the deal: no one wanted the trash-can Mac Pro. We wanted the existing model with the I/O capabilities you put in your home-user machines. But it's too late. You've lost us. We're tired of paying premium prices for last year's already-outdated technology.
And you guys are really missing the bus with your lack of VR-compatible hardware. Sure, VR might be a flash in the pan, but isn't the fact that you make NOTHING with the CPU/GPU power to support it worrying?
RatBastard, a former Mac customer.
Your information is highly dated, and perhaps your sources are also a bit biased. At any rate, a 100x performance hit is flat-out wrong.
Static translation was achieving 50-70% of native performance (measured against clock cycles) with FX!32 on Windows NT for Alpha in the mid-'90s. The problem has of course been very well studied since then, particularly with the advent of virtualization and the x64 instruction set, and the need to improve the performance of x86 code running even on Intel's own platforms. Furthermore, for any particularly glaring issues that are the fault of the hardware -- well, hardware is much more easily tuned today than it once was. A bespoke opcode or extra register to assist in a specific task is no longer a monumental engineering undertaking; it is mostly a matter of dev/test/validate.
The approach taken with x64 to support native x86 execution is quite different from attacking the problem with emulation. Is there a performance hit? Certainly, but a hit of 10-20% is easily outweighed by the fact that you might be able to have an 8-core ARM chip for the same price and power budget as a 2-core x86 mobile CPU. The applications that lose in this scenario are the ones that rely on raw single-thread performance. Certainly some games are in this camp, but many games that make efficient use of threads are not.
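The tradeoff described here can be sketched with Amdahl's law. The numbers below are illustrative assumptions, not benchmarks: translated x86 code running at 0.85x per-core native speed (the 10-20% hit above) on a hypothetical 8-core ARM part, versus a 2-core x86 part at full per-core speed.

```python
# Toy Amdahl's-law comparison; all figures are hypothetical.
def speedup(parallel_fraction, cores, per_core_speed):
    """Amdahl's law, scaled by relative per-core speed."""
    serial = 1.0 - parallel_fraction
    return per_core_speed / (serial + parallel_fraction / cores)

for p in (0.0, 0.5, 0.9, 0.95):
    arm = speedup(p, cores=8, per_core_speed=0.85)  # translated x86 on 8-core ARM
    x86 = speedup(p, cores=2, per_core_speed=1.0)   # native on 2-core x86
    print(f"parallel={p:.2f}  8-core ARM: {arm:.2f}x  2-core x86: {x86:.2f}x")
```

Under these assumptions, a purely serial workload runs at 0.85x on the ARM part (it loses), while a 90%-parallel workload runs roughly twice as fast as on the 2-core chip, which is the point about thread-friendly games.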
Looks like Chris Rock called it. Well-done, sir.
I'd like a Mac Pro with Nvidia GPUs in them. Everything I use runs on CUDA and my 2012 Mac Pro is showing its age.
In-car systems such as this are a hopeless battle. There is absurd vendor lock-in because all of two or three companies have built a technology base big enough to offer a system that can be custom-assembled for a particular year and model of car. That system will then be deployed in about 100,000 cars at best and will never be updated or serviced after about six months unless there is a vehicle safety issue.
I'm not sure what the exact solution is, but in one way or another there needs to be a mandatory open standard that allows a third-party device to show information on vehicle displays, receive input from vehicle control interfaces (steering-wheel buttons, touchscreens, etc.), and interact with other auxiliary systems. We have things like CarPlay and Android Auto, but despite manufacturers pledging broad support, very few cars are actually being sold with such capability.
Liquor is sold in "pints" and "half pints" -- typically 375 ml and 200 ml -- in every place I've been to. In fact, it's very difficult to purchase the little single-shot bottles in many states without buying a bunch of them together in a larger package.
Kenny Baker isn't dead; he's just returning the map.
In that case I would be asking what Apple wants to do with distributed graph analytics, because that was probably Turi's most interesting and unique product and area of expertise. They have a great library for handling extremely large graphs distributed over many nodes, and a lot of expertise in exactly how to do that really well.
I have to admit to being a little unclear on Apple's plans here. I'm somewhat familiar with Turi's product offerings (at least, I was back when they were called Dato). It's more of a pure data-analytics tool than anything else, and personally I found the underlying open-source Python libraries far more compelling than the point-and-click predictive-analytics and charting GUIs that seemed to be their main product. And even on that front, I would put more stock in scikit-learn, pandas, Dask, and the many open-source deep learning libraries (mostly built on Theano and TensorFlow) if I really wanted to do machine learning and distributed machine learning.
Now don't get me wrong, Turi has some nice products, but they tend to be standalone suites designed to give front-line analysts a nice GUI over basic machine learning tools, not "push the envelope" AI. I really can't see what Apple would do with it beyond building up a business-analytics suite to compete with Tableau and Azure ML. Anyone have any better ideas?
I do rendering on my 2012 12-core Mac Pro. It would cost me almost three times what I paid for that machine to replace it with a new 12-core Mac Pro -- which, thanks to my software moving to CUDA, would be useless to me anyway. Add to this the fact that no modern Mac supports CUDA, since Apple went with AMD GPUs, and that pool of software is growing ever smaller. I'm almost at the point of abandoning Macs altogether and getting a Windows multi-GPU workstation instead. For a hell of a lot less than a new Mac Pro.
Unless every vendor out there suddenly decides to support the trainwreck that is OpenCL, I've long ago purchased my last Mac.
Once the new version of CUDA is released, my rendering software will work with it. But none of them do right now.
Bell Labs Unix -- Reach out and grep someone.