I was in the AI lab at MIT, testing my wits against LISP. In walks Marvin Minsky.
I asked him if he could give me a tip or two about atoms.
His response to me was: "Well, why don't you wait until the computer speaks your language... Then program it in that?"
That was a lot longer ago than 15 years...
Does anyone have any experience with the searching and indexing capabilities in these devices?
I want to store my technical library on one of these, but I've been reluctant to switch, because I want searching and indexing good enough to make the device a decent replacement for the 'technical library search' use case: someone asks a question, and I run one search and get the results, across all the documents I have stored, displayed in a results list for easy access.
Does anyone have any experience with these functions in a variety of readers?
Well, for me it's not about putting the content on the reader. It's about using an e-reader to store, and search through, the books that I will load onto it.
So, I am looking for Slashdot to talk to me about doing search on an ebook reader.
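To make the use case concrete, here is a minimal sketch of "one search across all the documents": a toy inverted index over a few in-memory documents. The titles and text here are made up for illustration; a real reader would index the files actually stored on the device.

```python
# Toy inverted index: map each word to the set of titles containing it,
# then answer a query with the titles matching every query word.
import re
from collections import defaultdict

library = {  # hypothetical stand-ins for books loaded onto the reader
    "TCP/IP Illustrated": "the tcp retransmission timer and congestion window",
    "Unix Power Tools": "shell pipelines and the find command",
    "TCP Tuning Guide": "tuning the congestion window for long fat networks",
}

index = defaultdict(set)
for title, text in library.items():
    for word in re.findall(r"\w+", text.lower()):
        index[word].add(title)

def search(query: str) -> list[str]:
    """Return the sorted titles containing every word in the query."""
    words = re.findall(r"\w+", query.lower())
    if not words:
        return []
    return sorted(set.intersection(*(index[w] for w in words)))
```

One search, one results list: `search("congestion window")` returns both TCP titles, skipping the Unix book.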
This strikes me as time-consuming: having to think the letters to type a word. I want to be able to think the word and have it appear. When do we get a semantic, bi-directional neural interface?
Think about this: when I start to think about a document, I bet there is a planning part of my brain that is forming an outline of the document before I even start to come up with the actual content. I'd LOVE to tap into that planning, lay out an outline just by thinking about it, and then fill that outline with content through thought alone.
Imagine applying this to code generation!
I have this software running on my phone, and it does work.
What stuns me is that while this thing is in 'beta' and returning poor search results, they have the opportunity to 'train up' the AI, while also keeping hold of a bevy of images collected from the few thousand (or hundred thousand) phones that geeks like us were willing to install it on...
I bet that the corpus of images they collect during the next 4 years - the beta period - will be pretty impressive, and kind of scary. I bet that they claim rights on all of them. I guess we need to start watermarking the photos from our phonecams.
Just my thoughts...
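The watermarking idea can be sketched without any particular app or format: a minimal, hypothetical example that hides an ASCII tag in the least significant bit of each pixel byte. Raw bytes stand in for decoded image data here; a real tool would have to decode and re-encode the image format first.

```python
# LSB watermark sketch: pack the tag's bits into the low bit of
# successive pixel bytes, changing each byte by at most 1.

def embed(pixels: bytes, tag: str) -> bytes:
    """Return a copy of pixels with tag hidden in the low bits."""
    bits = [(b >> i) & 1 for b in tag.encode("ascii")
            for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def extract(pixels: bytes, length: int) -> str:
    """Read back a length-character tag from the low bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return "".join(
        chr(int("".join(map(str, bits[i : i + 8])), 2))
        for i in range(0, len(bits), 8)
    )
```

This only marks the raw data, of course; anything that re-compresses the photo (as most upload services do) would wipe it out, which is part of why visible or metadata watermarks are more common.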
It really is about time that AT&T had motivation to actually upgrade their network so that it is usable. Considering the disparity between our infrastructure and that of most of the rest of the world, I think this is progress that is a long time coming.
I also think it is very similar to the response AT&T had when DSL became a reality. Here in the Midwest, Southwestern Bell sat on DSL for YEARS before actually building it into their network, and they made close to the same excuse that AT&T is making about this.
Not amazed, not amused, just waiting...
Oh yeah, an Android phone would help too, AT&T...
Pound for pound, the amoeba is the most vicious animal on earth.