Comment Re:Not mysterious. Just lousy. (Score 3, Interesting) 229

The sad thing is I really like the OS, and I'd be happy to develop for it if they made development accessible and quit leaving trails of unfixed bugs behind them.

How exactly is development not accessible?

- Apps do not have to be distributed through the Mac App Store.
- Xcode is provided for free along with all documentation. There are tons of other IDEs and languages as well.
- Yes, there are bugs, but all platforms have bugs. Surely as an OS X user you can see bugs as well.

I'm not sure what you're looking for to make development more accessible.

Comment Swift != Interface Builder (Score 1) 69

"But is Swift really so easy (or at least as easy as anything else in a developer's workflow)? This new walkthrough of Interface Builder (via Dice) shows that it's indeed simple to build an app with these custom tools... so long as the app itself is simple."

What? Seriously, Slashdot, if you're going to run Apple articles, at least have submitters who have half a clue what they're talking about. "How good is Swift? Let's find out by using Interface Builder, which is not Swift at all!"

Swift and Interface Builder can be used together, but they're not strongly related components. They're related the way a WYSIWYG web tool like Dreamweaver is related to JavaScript: both are helpful to get what you need done, but they don't replace each other. To give you an idea of how true that is, Interface Builder first shipped in 1986. It's advanced a lot since then, but it's almost 30 years older than Swift, so obviously it's had a long life away from Swift.

Duh, you can't create a big, complicated app with only Interface Builder, just like you can't create a big, fancy web app with just the visual components of Dreamweaver. You've got to get down and actually write some code, which, you know, is what Swift is for, Swift being a programming language and all. So I find it really odd that this post is about reviewing a programming language by trying to use a completely different tool that is not that programming language.

Comment Re:Obj-C (Score 1) 316

Automatic reference counting means adding retain and release messages automatically; there most certainly is a runtime hit, and that's on top of the usual memory allocator costs, which can be quite high. A good compiler can eliminate some of those retain/release calls.

Sure, but I would assume any manual memory management system worth its salt is doing retain/release. Most of the big C++ libraries do.

Furthermore, because of deallocation cascades, a release message in such schemes can have a very high latency (don't know whether Apple tried to add workarounds).

Two things:
- Deallocation cascades are inherent in memory management. Neither reference counting, manual management, nor garbage collection avoids them. So I'm not sure what the point here is.
- As far as latency goes, Apple uses tagged pointers (http://en.wikipedia.org/wiki/Tagged_pointer), which reserve spare room in the pointer itself for holding the reference count. In practice, this means there is practically no latency for adjusting the retain count; a sketch of the idea follows this list.
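
Here's a minimal sketch of that idea, with invented names, not Apple's actual implementation: a 16-byte-aligned address leaves its low four bits free, so a small retain count can live inside the pointer word itself, and retain/release collapse to integer arithmetic.

```swift
// Hypothetical sketch: high bits hold the address, low 4 bits the count.
struct InlineCountedRef {
    private var bits: UInt

    init(address: UInt) {
        precondition(address % 16 == 0, "address must be 16-byte aligned")
        bits = address  // count starts at 0 in the low 4 bits
    }

    var address: UInt { bits & ~UInt(0xF) }
    var count: UInt { bits & 0xF }

    // Retain/release become a bounds check and an add: no heap lookup,
    // no side table, which is why the latency is negligible.
    mutating func retain()  { if count < 0xF { bits += 1 } }
    mutating func release() { if count > 0   { bits -= 1 } }
}

var ref = InlineCountedRef(address: 0x1000)
ref.retain()
print(ref.count, String(ref.address, radix: 16))  // 1 1000
```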

And, of course, ARC has the same problems with circular references that regular reference counting has.

Leaks are a problem under garbage collection too. A tracing collector will reclaim an unreachable cycle, but anything still reachable through a forgotten listener or cache leaks all the same; under ARC, the fix for a true cycle is just marking the back-reference weak (see the sketch below).
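
For the record, here's a minimal Swift sketch of the cycle problem under ARC and the standard fix; `weak` on the back-reference is what lets both objects die:

```swift
class Parent {
    var child: Child?
    deinit { print("Parent deallocated") }
}

class Child {
    weak var parent: Parent?  // weak breaks the cycle; drop it to see the leak
    deinit { print("Child deallocated") }
}

func demo() {
    let p = Parent()
    let c = Child()
    p.child = c
    c.parent = p
    // When demo() returns, both refcounts hit zero and both deinits
    // print. With a strong back-reference, neither would ever run.
}

demo()
```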

Reference counting is a mediocre memory management scheme at best; people use it in C-like languages because they don't have a choice. It is inferior in just about every way (runtime overhead, latency, memory utilization) to a good garbage collector.

I don't see how it is at all. Results are instant. Overhead is far less. Pretty much every claim here needs a citation. It's hard to see how an entire process constantly analyzing object references is less overhead than pre-handling those references. Memory utilization? With a collector I have to burn a bunch of memory on the garbage collection process, and then watch my memory climb and drop off a cliff over and over while I wait for the collector to run, while a retain-counted app generally stays at a pretty stable allocation count because frees happen instantly. Puh-lease.

I saw your claim below that GC is identical to retain/release in behavior. It really is not. If I clear out a reference in Obj-C or any other retain/release language, the memory is instantly freed. In Java, I have to wait for another run of the garbage collector, which can take a while unless I manually trigger it. Yes, YOU don't have to wait for anything in the code. But ignorance is bliss.
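
Here's a tiny Swift (ARC) sketch of that difference: deallocation happens synchronously, on the very line the last strong reference disappears, with no collector involved.

```swift
class Buffer {
    let name: String
    init(name: String) { self.name = name }
    deinit { print("\(name) freed") }  // runs deterministically under ARC
}

var buf: Buffer? = Buffer(name: "scratch")
print("clearing reference...")
buf = nil                      // "scratch freed" prints right here
print("already freed by this line")
// In Java, by contrast, the object merely becomes *eligible* for
// collection at that point; actual reclamation waits for a GC pass.
```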

Here's basically the comparison I'd use: Retain/release is like having an incinerator you throw your garbage into. Garbage collection is like... well... having a garbage truck. In both cases when I'm writing code I can just throw things away in my trash bin and pretend it's not there. With retain/release/an incinerator, the memory is actually gone as soon as I throw it in the trash bin. With garbage collection, I've thrown it in the bin and forgotten about it, but that doesn't change that the trash will continue piling up until the trash guy comes, which may still be a while away. The large amount of piled-up trash is also a problem, and actually makes the cascade problem you were concerned about with retain/release WORSE. Instead of a few cascade relationships being dealloc'd at once, all the dropped objects pile up, wait for the garbage collector, and then the garbage collector has to pore through thousands of relationships all in one go, causing the program to come to a halt while everything waits for the collector to catch up. I've had to clean up a few Java messes that had that problem by manually firing the garbage collection to spread out the load.

Comment Re:Obj-C (Score 1) 316

CLR in this context means a very large standardized library, which is not subject to fragmentation nor availability. It runs or it doesn't, and it behaves as documented (by google or stack overflow, not necessarily MSDN).

That's not what the term CLR actually means (http://en.wikipedia.org/wiki/Common_Language_Runtime), nor does that necessarily apply to Swift. Swift does indeed have a slightly larger core function base than Obj-C, but still not enough to build an entire app. For example, there is no file I/O, no networking, and no GUI support in the core implementation (printing to the console is about it). You won't be building much with the Swift core library by itself.

Here's a list of every single function present in the Swift standard library as of June. That the whole thing fits on one web page should tell you it's nowhere near as expansive as the Java or .NET standard libraries:
http://practicalswift.com/2014...
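
As a quick illustration (in modern Swift syntax, which differs a bit from the 2014-era language this post was written against): printing needs nothing, but even reading a file pulls in Foundation, which is the Obj-C Cocoa layer, not the Swift standard library.

```swift
import Foundation

print("hello")  // Swift standard library: this is about the extent of its I/O

// File I/O comes from Foundation, not from Swift itself:
if let contents = try? String(contentsOfFile: "/etc/hosts", encoding: .utf8) {
    print(contents.prefix(80))
}
```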

Its primary intended API is Cocoa, but that is entirely detached from the language. As of posting time, Cocoa is still entirely written in Obj-C, so the primary library intended for use by Swift isn't even written in Swift itself. But I digress; Swift itself definitely does not have a very large standard library. When Swift is ported to other platforms you won't see Cocoa come with it. And Cocoa is different on iOS and on the Mac, so even if you're sticking to Apple platforms you don't have a common library between them. So right there, even if we decide Cocoa could be called Swift's standard library (which it isn't), it fails the fragmentation test you've put forward.

Not to mention, the R in CLR stands for runtime, and we're talking about the Swift standard library, not the runtime (which is the Obj-C runtime, not the Swift standard library anyway.)

Comment Re:Obj-C (Score 4, Informative) 316

It was my understanding that if you want "complete" control, you still need to use ObjC, and that Swift was for dashboards, things previously known as WebApps, and other lightweight situations where you aren't actually doing anything novel, just packaging an interface to a datastore or moving sprites around.

That said, Swift is just as good on inheritance as ObjC, and does garbage collection correctly (benefits of a CLR).

ObjC has been tuned to OS X/iOS, and if you write in ObjC, you should be able to make a single back end that's easily portable to OS X as well as iOS; Swift would be more for iOS only.

I really do like the real-time iteration available in Swift though.

That said, my opinion must be crap, because I'm older than Java too :D I still like Pascal and Common LISP, but wouldn't write a modern application in them (flashback to writing Avara mods in the 90's using ClarisWorks). Most stuff I write these days is in C or Python.

Oooof. So much wrong in a single post. Let's review....

Swift definitely does not do garbage collection. Obj-C actually had a garbage collector for a while (Swift never has had one), but it was optional, and support for it has ended.

What Obj-C has now is something called ARC (Automatic Reference Counting). At compile time (not run time), the compiler does a static analysis of the code, determines where it needs to add memory management code, and then quietly adds it for you. This means there is no run-time analysis overhead, and behind the scenes everything is still manual memory management. Sometimes you still need to hint to the compiler what to do (usually when trading pointers with C), but 99.99% of the time it just works.

Swift is built on the same runtime as Obj-C, so it inherits ARC. With Obj-C you can turn ARC off and continue writing manual code, and I'm not sure if Swift allows the same, but the behavior is otherwise identical: Swift uses the same manual memory management functions as Obj-C in the background, while in the foreground the developer writes without memory management. I'm not sure what this "benefits of a CLR" is you're talking about, as that's a term usually associated specifically with the Common Language Runtime of the Microsoft language family, but it's neither here nor there. Swift does not run in a VM (it's natively compiled, just like C or Obj-C), and it does not have a garbage collector. (But the compiler will add your memory management code for you.)
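
Here's a conceptual sketch of what that looks like. You never write the retain/release calls yourself; the comments mark roughly where the compiler would insert calls to the runtime's swift_retain/swift_release (the exact placement and elision are up to the optimizer):

```swift
class Widget {}

func use(_ w: Widget) { /* ... */ }

func handOff(_ w: Widget) {
    let copy = w      // compiler inserts, conceptually: swift_retain(w)
    use(copy)
    // end of copy's lifetime: compiler inserts swift_release(copy)
}

handOff(Widget())
```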

As far as Swift being multi-platform, Swift most definitely runs on OS X, so the language choice has no bearing on which platforms you want to port between. I have a partially Swift project going on the Mac right now. Swift is definitely not iOS-only. Beyond that, it looks like Apple will be working to open-source much of it and move it to other platforms.

I'm not sure what this business is about Swift being for lightweight solutions. It runs on the same runtime as Obj-C, it's starting to be as fast as Obj-C, and it interoperates with any Obj-C code (as Obj-C will interoperate with any Swift code). Apple has never messaged that it's for lightweight apps, and developers aren't treating it that way. I still prefer Obj-C, but I'm not sure what that bit is about at all.

Comment iOS 8 compatible apps not related (Score 4, Interesting) 504

The iOS 8 app upgrades are pretty much for things like being able to target new/any screen sizes. If you're on an existing device, that doesn't mean much. I don't think there is anything in the new SDK that would imply a performance decline in apps that adopt it.

The X.0.0 upgrades are pretty well known for including slower/unoptimized drivers and code paths. Apple is usually in a hurry to get the release out the door and they don't do all the optimizations they should. Usually by X.0.1 or X.1 they get things cleaned up. So it doesn't surprise me that 8.0 is a little pokey. 7.0 had basically the same issues.

Comment Re:Please make this thing useful for development (Score 1, Insightful) 101

Don't forget the "nearly every platform" comment from TFA. Apps aren't currently designed for use with a mouse, but it doesn't have to stay that way. The Android app format is coming close to being the fabled "universal binary", finally giving developers the long-promised write once, run anywhere ability.

Heh. The dream of the 90s is alive on Slashdot.

It wouldn't be the first. Java and HTML/JavaScript long beat Android to the punch. In fact, HTML/JavaScript does it better: OpenGL ES on Android isn't exactly platform-neutral (my Mac doesn't have an ES driver for its Nvidia/Intel hardware, so the best it can do is software rendering, while WebGL is abstracted so it renders perfectly).

We can use the lessons from its forebears to tell why it won't be adopted in the marketplace as a universal app solution. Both Java and HTML/CSS make universal app deployment technically a reality: for the past 20-ish years I've been able to write a Java app and deploy it on any platform, and HTML/CSS runs well on both desktop and mobile devices.

The usability problem you always run into is that by pretending all platforms are the same, the usability strengths of each platform are ignored. A mouse and pointer is a really basic example that both iOS and Android can handle, but what about security models? The Android, OS X, iOS, and Windows security models are entirely different. Apple platforms like to grant access capability by capability, at the moment each capability is used. Android doesn't work like that at all; it wants everything up front. So an Android app trying to access my Address Book simply doesn't have the API to do so on my Mac.
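
As a concrete example of the Apple-style on-demand model, here's roughly what capability-by-capability access looks like in Swift, using the Contacts framework that later replaced the ABAddressBook API this post would have been written against; the prompt appears at the moment of use, not at install time:

```swift
import Contacts

let store = CNContactStore()
// The system shows its permission dialog the first time this runs;
// nothing about contacts access is declared or granted at install time.
store.requestAccess(for: .contacts) { granted, error in
    if granted {
        print("address book unlocked, and only the address book")
    } else {
        print("user said no; the app keeps running without it")
    }
}
```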

Or what about contextual menus? I expect those on a Mac, but Android doesn't have them. Macs also draw differently: they expect scrollable content to flow under window sidebars and titlebars, and Android doesn't expect that. You can't make an Android app act like it's running natively on a Mac without reflowing all the widgets in the window. And Android apps don't have multiple windows, which I expect on a Mac. Mac applications also have toolbars (as do Windows applications), but Android doesn't even have an API for that, and every Mac toolbar is user-rearrangeable in a standard way that Windows doesn't match. Mac and Windows computers can have multiple GPUs, which means Android would need an API to handle a window shuffling from one GPU context to another, and I don't think it has that...

There are also font layout issues: Mac and Windows have different default fonts, which can dramatically shift line spacing and what text fits where. The Mac at least also has contextual definitions when you right-click on a word. Will Android apps have that? My Mac apps support QuickLook in the Finder, but there isn't anything like QuickLook under Android to abstract into. I also like searching with Spotlight, but Android apps don't have any Spotlight vendors. Do Android apps ask for my username and password to do secured operations? Again, Android apps don't have any idea of on-demand security, and I really don't want to enter my admin username/password every time I launch an Android app. The same would apply to UAC.

If you haven't stopped reading by now, you might be starting to get my point. The reason Java failed to take the desktop world by storm is that not all desktops are the same or even have the same capabilities. Yes, as you suggested, you can go down the road of adding a bunch of APIs to handle all these different scenarios. But then you're back to writing a bunch of code to support a bunch of different platforms. You're right back where you started. Java didn't end up saving time for multi-platform work because the dream of writing once and running anywhere was unattainable for desktop GUI applications, and it still is, for the same reasons. It's technically possible, but the same user experience everywhere was unacceptable to users and unworkable. Even Microsoft wasn't crazy enough to believe they could get away with write once, run anywhere for Office. Office on the Mac runs and looks different than the PC version because when they tried to make the Mac version look like the Windows version, Mac users revolted. I've had Windows Office users equally pitch a fit when they've been set up with the Mac version of Office.

HTML/JavaScript sidestepped this by defining very broad APIs and letting the platform make inferences as to how things should work. It was also shaped by the communal cooperation of all the different platform vendors to build something that could be molded to work on everything. Android's API does not leave much room for the native platform to reinterpret application logic at all, and it wasn't written by the platform vendors together.

It wouldn't surprise me if Google added this capability to Chrome. It would surprise me if there was any wide uptake.

Comment Re:Is this technically impossible - no. (Score 2) 191

This works because iMessages are stored on your devices, not on the server. So when you change your password and update it on your devices, the devices will re-transmit their iMessage history to each other. So no, not wrong.

If you pull all of your devices offline and reset them, and then take them back online, the history won't be available to sync, so all your messages will be gone. Apple does manage delivery, but the initial handshake is done by a peer-to-peer key exchange, so while Apple is caching and flinging data, they don't sit in the middle of the key exchange and can't read messages.
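
To illustrate why a relay that only ferries ciphertext can't read the traffic, here's a toy sketch, emphatically not Apple's actual iMessage protocol, written with CryptoKit (a framework that postdates this post): the two devices perform a key agreement using only each other's public keys, so the shared secret never touches the server.

```swift
import CryptoKit
import Foundation

// Each device generates its own key pair; only PUBLIC keys ever
// cross the wire (or Apple's servers).
let deviceA = Curve25519.KeyAgreement.PrivateKey()
let deviceB = Curve25519.KeyAgreement.PrivateKey()

// Device A derives the shared secret from B's public key; B can do
// the mirror-image computation and arrive at the same secret.
let secretA = try! deviceA.sharedSecretFromKeyAgreement(with: deviceB.publicKey)

let key = secretA.hkdfDerivedSymmetricKey(
    using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)

// The relay stores and forwards only this sealed blob; without either
// private key it cannot derive `key`, so the payload stays opaque.
let sealed = try! AES.GCM.seal(Data("hello".utf8), using: key)
print(sealed.combined!.base64EncodedString())
```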

Email is another matter. The nature of how email works means they probably have some sort of access.

All the complaints about how buggy iMessage is make sense when you look at all the mechanics it goes through to keep messages secure.

Comment Re:Ask anyone still on Dial Up (Score 1) 533

4mbps is 100 times faster than dialup, if not more because where I can usually get the full speed of my broadband connection, I almost never got the full speed of dialup, usually around 33kbps. What took a week to download on dialup takes 1 hour on 4 mbps.

You're right. It's faster than dial-up. That STILL doesn't make it broadband. The definition of broadband is not "it's faster than dial-up."

If we're still calling the 100 mbps cable connection I have now broadband 30 years from now because it's faster than dial up... Well that's just going to be stupid.

Comment Re:Ask anyone still on Dial Up (Score 5, Insightful) 533

Give anyone a 4 mbps connection who is living in an area that still has dialup as their only option, and ask them if it's broadband. If someone works to bring 4/1 mbps connections to more areas, they should be able to advertise it as broadband.

That's like saying I should be able to advertise my bicycle as a car if I'm selling it in an area that is still using horses.

Comment Re:Sigh (Score 1) 748

It's not even that. He can still hate people. He just can't ACT on it.

I'm not sure someone in that position could actually promise that, and I understand why the board would be uncomfortable.

His job entailed using his judgement to guide a company. Whatever he promises, his biases are part of that judgement. If the board doesn't like the sort of judgement he'd exercise in running the company, they're free to boot him, especially if there was a risk that he might treat homosexual employees unfairly, which both opens the company to lawsuits and could keep away good talent.

This reminds me of what the CEO of Urban Airship said: "Sure, I visit swingers clubs and sexually assaulted my girlfriend, but that's entirely separate from my work life." When you run a company, your personality and views are entirely relevant to your work life because they affect your judgement, which affects the company.

"My personal and work lives are entirely separate and I won't let me ideas from one affect the other" is totally a bogus excuse. You can't tell me that someone who does not like gay people at home is suddenly going to come to work, turn that entirely off, and then treat gay people like total equals and not discriminate in any fashion. Sorry, I don't buy it.

Comment Re:Minor detail glossed over in the headline (Score 2) 72

No. The phone should display a notification if an application is side loaded over USB. It shouldn't be possible to install an application without the user's knowledge. Trusting the connection should merely allow the phone and the computer to communicate. It should not allow remote control of the device.

It DOES display a notification when a computer attempts to establish a link, along with requiring user confirmation.

Comment Re:im happy google took this on (Score 1) 46

Just think different a little bit. Integrate the secure enclave into the button/sensor module.

I don't think that would work.

I'm pretty sure the secure enclave has authorization hooks into the hardware decryption on the CPU. Even if you moved the hardware encryption/decryption into the thumbprint reader, that brings up another problem with Ara: if you change the CPU or your hardware encryption module, do you lose the data that was encrypted with the old key?

Comment Re:People expecting their marketing for free (Score 5, Interesting) 258

Too many people want to get rich by selling apps and expect Apple to pay for the marketing of their apps for free on the App Store.

I don't think this is quite what people are expecting. Rather, the problem is Apple directly prohibits most ways that an app can be promoted. Want to do a demo? No great way to do it in the app store. A trial? Forbidden. Want to offer a download directly from the developer? Nope.

So really what developers are requesting is simple: if Apple wants to directly hand-hold the distribution and retail channel of an application, they either need to improve visibility for applications within that retail channel, or give developers more flexibility in how they can market applications. Apple isn't entirely responsible, but because they want developers to be so reliant on their storefront, the argument is that Apple needs to actually provide a good storefront to make that trade-off worth it.

It would be like if you struck a deal with Target where they had full control over how your product was sold and exclusive rights to sell it, and then they stuck it in a dark corner of their store and never sold a single unit.

Comment Re:Too many apps, too much appcrap (Score 5, Informative) 258

Question for you, as someone who has developed a mobile app:

How much harder is it to optimize a mobile version of the webpage vs writing an app from scratch and getting it approved for App Store release?

Mobile developer here who has done hybrid apps, Android apps, iOS apps, web apps, etc.

It's hard.

Web apps do not get the native scrolling mechanism, so scrolling feels very funky in web apps. Web app developers write their own inertial scrolling mechanisms to try to deal with it, but web apps always feel wrong as a result.

You also don't get access to a lot of native functions. No barcode scanning. No access to the user's preloaded Facebook account (with authorization, of course.)

There is another problem in that, especially on Android, web technologies are just badly supported. It's getting better in more recent versions of Android where Chrome is actually the engine used end to end by everyone, but earlier versions still on Google's old ass version of WebKit blew chunks.

Loading can be a problem as well. Real apps by definition cache a certain amount of code and resources on the device; a web page has to fetch all resources from start to finish. So while a real app has its loading UI cached on the device and can display it right away when the user taps a link, a web page has to go fetch a UI over the network just to display a loading UI for the operation the web app is about to do over the network. Gross.

The other really messy thing is that a real app can pretty easily figure out what kind of device it's on and render content accordingly. Web apps can kind of guess what type of display/device they're running on, but again, it can be messy, especially with things like Adaptive UI and multi-windowing coming to iOS, where your window or screen size may have no real connection to what kind of device you're running on. Web pages at this point basically assume they're always rendering full screen on mobile and do their layout computations based on that, but that looks like it will change on future iOS and Android devices.

You also have a problem with native widgets. If I code a real iOS app and run it on iOS 6, it looks like iOS 6; if I run it on iOS 7, it looks like iOS 7. I don't have to create new assets; the app automatically picks up the correct look from the widget set built into the OS. With a web page, I get the "joy" of building my widget set from scratch and trying to make it at least resemble the system UI widgets the user has been trained to use. And better yet, if I make my web app look like an iOS app, I suddenly have a bunch of Android users unhappy that my web app looks like an iOS app.

Finally, web apps don't offer any way to be embedded as extensions on iOS or activities on Android. You can kind of fake it with some really ugly URL-handling handshakes, but that is really problem-prone.

TL;DR: Mobile web frameworks/browsers are still immature and don't offer basic mobile-specific functionality that's needed to do apps well. It's not that building a web app as good as a native app is hard; it's impossible, because the feature sets just aren't there.
