How AI Will Eat UI (artyomavanesov.com) 110
The inevitable day when machines learn to design our apps. From a report: When AR wearables hit the market, our apps will start tracking both our conscious and subconscious behavior. By measuring our heart rate, respiration, pupil size, and eye movement, our AIs will be able to map our psychology in high resolution. And armed with this information, our interfaces will morph and adapt to our mood as we go about our day. Future interfaces will not be curated, but tailored to fulfill our subconscious needs. Maybe the best way to navigate a digital ecosystem isn't through buttons and sliders. Maybe the solution is something more organic and abstract.
Autodesk is developing a system that uses Generative Design to create 3D models. You enter your requirements, and the system spits out a solution. The method has already produced drones, airplane parts, and hot rods. So it's only a matter of time before we start seeing AI-generated interfaces. This may all sound far out, but the future tends to arrive sooner than we expect. One day, in a brave new world, we will look at contemporary interfaces the same way we look at an old typewriter: gawking at its crudeness and appreciating how far we've come.
There might be hope (Score:3)
Re: (Score:2)
The problem with most UI designs is the following.
1. They are not customizable to everyone's workflow, so no one really gets the optimal UI; they just use the same workflow for different jobs.
2. They are so customizable that they cannot give good defaults.
3. The workflow usage always changes, but code wants to be static.
Re: (Score:2)
Needs to go further than conversation (Score:3)
conversational
Needs to go a step beyond that.
The UI needs to infer anger and frustration on the part of the user.
If I keep giving my phone a dirty look every time it gives me a fourth and fifth equivalent of an "are you sure" modal dialog, it should stop doing so. On the other hand, if my mom expresses relief that she's saved by those confirmation requests it should continue to give them to her.
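Something like that could be as simple as per-user throttling of confirmations. A minimal sketch in Python, with the caveat that the annoyance signal and threshold here are invented stand-ins for the frustration inference imagined above:

class ConfirmPolicy:
    """Per-user throttling of repeated "are you sure?" dialogs."""
    def __init__(self, annoyance_threshold=5):
        self.instant_dismissals = 0  # confirmed in under a second: a proxy for annoyance
        self.cancels = 0             # backed out at the prompt: the dialog saved them
        self.annoyance_threshold = annoyance_threshold

    def record(self, seconds_to_respond, confirmed):
        if not confirmed:
            self.cancels += 1
        elif seconds_to_respond < 1.0:
            self.instant_dismissals += 1

    def should_confirm(self):
        # Keep prompting users the dialog has ever saved (like mom);
        # back off for users who always blow straight through it.
        return self.cancels > 0 or self.instant_dismissals < self.annoyance_threshold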
Re: (Score:2)
I'm pretty sure that the US already infers anger and frustration on the part of the user (i.e., US citizens)
Re:There might be hope (Score:4, Insightful)
I dunno.
Almost nothing infuriates me more than calling somewhere commercial for support, and having that stupid PHONE robot that insists you talk to them, rather than just letting you push number buttons.
I especially hate that in an office setting, or out and about around other folks....
And, they never fucking get it right when talking it seems....
I don't want to talk to my devices....I want a button or prompt to push/touch....or even a gesture maybe, I hate trying to talk to a machine.
Hell, even at home, I get to the point of just repeating over and over "get me a fucking operator"...and it finally gets me to a real person.
Re: (Score:2)
I might be wrong about this, but I still imagine really powerful people at the top of the pyramid barking out orders more than typing and pointing and clicking. And so I gather that's what people really want, if technology can give it to them.
Also it's easy to imagine a new interface directly replacing an old one which is never quite right. You're never going to say to your car, "turn on the right blinker" instead of pulling the turn signal lever. But you probab
Re: (Score:2)
Hear, hear.
Same here, I never got accustomed to talking to a device. Maybe if it evolves to the point where I could talk to it like I talk to a human being, but until then any attempt falls straight to the bottom of the uncanny valley for me.
Re: (Score:2)
I dunno.
Almost nothing infuriates me more than calling somewhere commercial for support, and having that stupid PHONE robot that insists you talk to them, rather than just letting you push number buttons.
I especially hate that in an office setting, or out and about around other folks....
And, they never fucking get it right when talking it seems....
I don't want to talk to my devices....I want a button or prompt to push/touch....or even a gesture maybe, I hate trying to talk to a machine.
Hell, even at home, I get to the point of just repeating over and over "get me a fucking operator"...and it finally gets me to a real person.
Yeah, exactly. Sometimes other people are asleep in your home, so you want to do things quietly? Or sometimes other people are having conversations, or watching TV, or doing homework, and would just as soon not have to listen to you loudly repeating when you want to pick up your prescription or whatever, hoping that the bot will understand?
It's especially infuriating when the business previously had a perfectly good push button menu.
Re: (Score:2)
God I hope not. I read an article on comparative language data rates - how many bits per second of information can you actually convey with each different language - German, English, Spanish, Italian, Chinese, etc.
They found they all ended up at a rate of 39 bits per second. Conversational interfaces are incredibly slow - and that's person-to-person. Person-to-machine will be less, always.
While it has flexibility going for it, speech is just too dang slow for any useful work.
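For a sense of scale, here is a back-of-the-envelope sketch in Python. The 39 bits/s ceiling is the figure from the study cited above (Coupé et al., 2019); the payload sizes are rough assumptions of mine:

SPEECH_BPS = 39  # cross-language spoken information rate (Coupé et al., 2019)

payloads_bits = {
    "one menu choice among 32 options": 5,
    "a tweet (~280 chars at ~1.3 bits/char)": 364,
    "a small compressed screenshot (~100 KB)": 800_000,
}

for name, bits in payloads_bits.items():
    print(f"{name}: ~{bits / SPEECH_BPS:,.1f} s at the speech-rate ceiling")

Even at the theoretical ceiling (real utterances carry far less), anything visually dense takes hours to speak; a screen hands it over in a glance.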
Re: (Score:3)
The problem with most UI designs is that they were made with one of three motivations: An art major completely disregarding actual usability in favor of their "artistic vision", an art major deliberately making something hard and painful to use as a "statement", or a company trying to cripple the user as much as possible.
Re: (Score:2)
I made a web app that had a UI that mimicked a command line program: ask one question, do stuff, ask another. All the IT people hated it; heck, I didn't like it much, but that method seemed to solve what the users were asking for. The users loved it, as it was easier for them to use.
Much of the problem with UI comes from making something to do something new, but relying on old interface methods to do it.
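The shape of that interaction, as a minimal console sketch (the steps are invented; in the web app each step would be a single question on the page):

STEPS = [
    ("Customer ID? ", str),
    ("Number of items? ", int),
    ("Ship express? (y/n) ", lambda s: s.strip().lower() in ("y", "yes")),
]

def run_wizard():
    """Ask one question at a time, command-line style."""
    answers = []
    for prompt, parse in STEPS:
        while True:
            try:
                answers.append(parse(input(prompt)))
                break
            except ValueError:
                print("Didn't understand that, try again.")
    return answers

if __name__ == "__main__":
    print(run_wizard())

One visible question at a time is plausibly why the users found it easier: there is never more than one thing to decide.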
Re: There might be hope (Score:2)
To be fair, it is really hard to define good UI. I loathed doing website design because there is no theory. Of course it should look good on different screens, but ultimately usage should define the position of everything, to minimize use time and maximize the probability of the user getting what they want by poking at obvious things. But the vast majority of things CSS can do should not be done.
Re: (Score:2)
It's already somewhat here (Score:4, Informative)
I look forward to the day... (Score:3)
Re: (Score:2)
...or EMACS.
(...what?)
Re: (Score:2)
If there is any 'I' in the AI at all, it will look like vi.
Re: (Score:2)
Seriously, much as I like emacs, its default interface is clearly "evolved" rather than designed-- it's a mess that people like myself like because we've gotten used to it-- and because we like hacking on it to tweak the details.
Wordstar actually did an excellent job of combining the power of a keyboard interface with the discoverability of menu driven designs-- therefore, it was forgotten completely as everyone ran off chasing new bright and shinies.
There is no AI (Score:4, Insightful)
It doesn't exist. Now stop writing dumb ass articles about it. Thanks
Re: (Score:2, Insightful)
It doesn't exist. Now stop writing dumb ass articles about it. Thanks
There's no need to get hung up on semantics.
The article is talking about *something*, which certainly does exist. It's a way to direct computers that's different than the traditional approach of humans writing individual instructions into text files. The industry-standard term for this thing is "AI", although it certainly isn't any kind of human or animal "intelligence"; but neither of those things really matters.
What does matter is that it exists, and that it's already successfully employed in widespread use; maybe most notably in allowing corporations and governments to do Orwellian surveillance of the public on a mass scale. It already has a major impact on your life.
Re: There is no AI (Score:2)
Re:There is no AI (Score:4, Insightful)
Glorified table lookup is NOT A.I.
Stop buying into bullshit definitions (and articles.)
These articles are about "a.i." -- artificial ignorance -- where they appear to have pseudo-intelligence but have no ability to reason and are basically fucking stupid about everything except one tiny little sliver of a topic where they have crunched numbers to make it appear they know what they are doing.
Until there is an _actual_ test for consciousness, the term A.I. is a bullshit term; this is why I prefer the accurate acronym: a.i.
The term ML, Machine Learning, is a little more honest, but hijacking an existing term, Machine Language, is myopic and creates confusion.
If you can't even be honest and precise with definitions then there is little point in discussing anything afterwards since obviously you don't value honesty, clarity, or precision.
Re: (Score:1)
Not only are you hung up on semantics, you've got your panties in a wad about it.
Maybe it would be better for you to direct your anger at the manufacturers of Grape Nuts. At least you'd only have to persuade one party to change their inaccurate terminology instead of an entire industry.
Re: (Score:1, Flamebait)
They aren't semantics. They are words and they have meaning, you insufferable fucking cunt
Re: (Score:1)
Let's try to keep the discussion civil please.
Ad hominems don't solve anything.
Parent is probably not even a programmer which is why they are ignorant about how "AI" even works and keeps whining about semantics.
Re: (Score:2)
Um, I was programming back in the 80s when I briefly studied in the field of AI during the original "boom". (That was when some academic eggheads thought that since Lisp macros allow you to write self-referential code, they had "Intelligence"!) I quickly concluded it was all BS because: a) Lisp is simply another computer language, and b) you can't pipe human-brain levels of pattern processing through a single 16-bit accumulator.
However, since CPU power, storage and memory have each increased by 6
Re: (Score:2)
1) The language doesn't matter.
2) Human brain level processing isn't needed for artificial intelligence. In fact, with one exception, there is no reason to think it would be.
The one exception being if we create something by reverse engineering the human brain, literally. In that case the end result would very likely look like 'brain level processing'
Yes, we have systems that are intelligent.
intelligence /inˈteləjəns/
noun
1. the ability to acquire and apply knowledge and skills.
We have systems th
Re: (Score:2)
They are a relational table lookup: the lookup is defined by the relational nature of the query to a series of database tables, linked together by the query, with the query itself altered by its progress through those various tables, taking into account past queries and the success or failure their results produced when applied, and this is used to alter the nature of future similar queries. The relational part and the query alteration are important, and how well that happens, how well the 'machine' reasons, will define how
That's amazing. (Score:5, Insightful)
You just said "they aren't semantics, they are words and they have meaning."
"Semantics" is also a word, with a meaning. Specifically: "the branch of linguistics and logic concerned with meaning."
So, your statement is directly self-contradictory.
And, since you agree that words have meanings, here is the actual meaning of "Artificial Intelligence" according to the dictionary: [merriam-webster.com]
I emphasized the word "imitate" because that is very relevant to this discussion. Machines "imitate" intelligent behavior, because they aren't actually intelligent. That is what "imitate" means, you see. Something is being faked. In this case, intelligence.
People keep insisting "it is not AI because it is not actually intelligent!" But that's the point. It isn't supposed to be actually intelligent. Not, at least, according to the actual definitions of the words being used here.
Perhaps you are thinking of something like "machine intelligence" or "synthetic intelligence," neither of which exist today nor are we anywhere close. But if we are talking about "artificial" intelligence, then we are talking about unintelligent machines doing specific things that usually require intelligence to do. That is to say, simple algorithms are used to generate the illusion of intelligence (which isn't actually there). Or, to state it in one word, "imitation."
Re: (Score:3)
That's a bad definition. We have a name for that, and it is "simulated intelligence", or SI. Artificial intelligence is what actual intelligence assembled by humans is meant to be called. In itself, machine learning is just a strategy for one or both of these things, and not directly comparable to either.
Re: (Score:2)
Look up the word intelligence. Then look at many modern computer systems. They do the same thing.
Calling it machine learning is no different than calling the brain 'cell learning'.
I can not stress this enough. It's Artificial Intelligence, not Artificial Emotions.*
Remove the emotional component from your meaning of intelligence.
*Which is good, because we don't want the system calculating the debt to have a bad hair day!
Re: (Score:2)
"the ability to acquire and apply knowledge and skills."
Computers are good at acquiring knowledge. But they're not good at making sense of it. Neither are we at first. We have to learn how to learn, and then we have to learn how to be sensible. Many humans never become very intelligent, either. But computers don't learn how to learn new things. They are still limited to learning what they've been taught to learn. Most of the things we call AI aren't able to acquire new skills at all, they have to be specifi
Re: (Score:2)
And you are using them wrong. Maybe look them the fuck up, you poor excuse for a limp-wristed cum stain.
See, I can call people names too!
Anyway, the only reason you are angry is because you are wrong, have no argument, and so resort to name calling.
Re: (Score:2)
Glorified table lookup is the human brain.
A NEST is conscious, maybe you don't know the definition?
consciousness /ˈkän(t)SHəsnəs/
noun
the state of being awake and aware of one's surroundings.
Do you mean sentience?
And if so, why do you believe sentience is necessary for intelligence?
Sentience, being a form of emotion, is in no way needed for intelligence.
"If you can't even be honest and precise with definitions"
We can, and that is how we know you are wrong.
Re: (Score:2)
Glorified table lookup is NOT A.I.
It's called weak AI (or narrow AI). Check it out [wikipedia.org]. Weak AI basically is a term developed to mean, "Anything useful we discovered while looking for general AI, but is not general AI."
Re: (Score:2)
A. Would you have accepted it just as easily if they called it "magic"?
2. It does not have a major impact on my life. Nobody from the government is bugging me, and what I've been buying and how much I pay is affected by many variables which existed long before the current incarnations of "AI" and "big data".
Re: There is no AI (Score:1)
Ouch, my mama is hurt that you didn't have anything to say about her.
Re: (Score:3)
The article is talking about *something*, which certainly does exist. It's a way to direct computers that's different than the traditional approach of humans writing individual instructions into text files.
This is way too nebulous. Completely meaningless in fact. People have been "directing" computers and technology more generally at much higher levels than "writing individual instructions into text files" for countless decades.
The industry-standard term for this thing is "AI", although it certainly isn't any kind of human or animal "intelligence"; but neither of those things really matters.
What matters is that "AI" is by itself a completely meaningless term because it conveys no useful information. Simply way too broad to be useful.
What does matter is that it exists, and that it's already successfully employed in widespread use; maybe most notably in allowing corporations and governments to do Orwellian surveillance of the public on a mass scale. It already has a major impact on your life.
The fact there is an industry that exists to conduct mass stalking and profit from "AI" conveys nothing when everything is labeled "AI".
Yes there is. (Score:3, Insightful)
AI exists, and has for a long time. It just doesn't mean what you think it means.
And you lost this battle long before it began. The rest of the world doesn't care about your so-high-it's-useless bar. And articles about AI will continue to be written, and discussions about AI will continue to be had, despite your requests that they stop.
You are just spitting into the wind, at this point.
Re: (Score:2)
Yes, it does, dumb dumb.
The bar keeps moving. I have a device in my pocket that, when described to any AI researcher in 1970, would have been called AI.
But as we create algorithms that can do things previously thought of as 'only humans can do', the AI bar gets moved, because people are like: "That doesn't count, it's just math."
We have machines that can look at data and create a new algorithm to predict future results. In some cases, we don't understand how it works, but the predictions are spot on.
We have application wo
The future tends to arrive sooner than we expect (Score:3)
This may all sound far out, but the future tends to arrive sooner than we expect.
That doesn't sound right. Especially when it comes to AI.
Re: (Score:3)
This may all sound far out, but the future tends to arrive sooner than we expect.
That doesn't sound right. Especially when it comes to AI.
It'll be like practical fusion. As each decade passes, we'll be just 40 years away.
Re: (Score:2)
So .. you're saying that AI-generated fusion power should be here next week?
Re: The future tends to arrive sooner than we expe (Score:4, Funny)
Re: (Score:2)
If the future arrived sooner than I expected, then I would already be navigating code in 3-D, literally walking through it, touching parts of it, and following where the paths lead. We were supposed to have escaped the 2-D limitation, oh, about a decade ago.
Re: (Score:1)
If the future arrived sooner than I expected, then I would already be navigating code in 3-D, literally walking through it, touching parts of it, and following where the paths lead. We were supposed to have escaped the 2-D limitation, oh, about a decade ago.
3-D? I thought we were still busy making everything flat?
Re: (Score:1)
Good point. Windows 3.1 was more 3-D than Windows or the web is today...
Re: (Score:2)
LOL. Your version of the future isn't the future. I think the fact that no one predicted cell phones underscores that, or how those who predicted computers were so limited in their thinking.
"then I would already be navigating code in 3-D"
Stupidest fucking way to go through code. This is like in the 90s when everyone thought a VR office meant pretending to go through desk drawers and moving as if you were in an office. How wasteful is that?
That said, you can model code in 3D. The only thing left to do is import it into a VR s
Re: (Score:1)
This wasn't necessarily for navigating individual calls; that's more quickly done by choosing your IDE's "goto definition" or "goto reference" function. This was for the visually-oriented (whatever fraction of us that is), and for getting an overview of a system, i.e., answering higher-level questions about the structure of code. It wouldn't fill a need for everyone, and that's fine. But it was an idea being bandied about 25 or so years ago.
and just think... (Score:3)
Re: (Score:2)
Yeah, that part got me....
I really would like some practical AR, but I do NOT want it reading my biometrics, emotions, etc....I don't want it watching and learning ME and doing God knows what with that data.
I'll not use it if that is what is required.
Re: (Score:2)
It will. No stopping it. It might just be a little data, from many different sources, but they will get it all.
I think we need to spend less energy fighting the inevitable*, and more energy fighting for Constitutional rights protecting our data and personal information.
The fact that someone may never join Facebook, but Facebook will still have a pretty complete data set about that person, is criminal.
*It's inevitable because we are doing it to ourselves; there are very useful reasons for it.
Reminds me of Brenda Laurel's book (Score:3)
Computers as Theatre.
https://books.google.com/books... [google.com]
https://en.wikipedia.org/wiki/... [wikipedia.org]
It's an analysis of computer-human interaction as seen through the lens of Aristotelian poetics. Very good stuff, don't take Microsoft's bastardization of this work (Clippy) as evidence of the failure of the theory.
Will make it difficult to communicate... (Score:2)
Re: (Score:2)
"When AR wearables hit the market, our apps will start tracking both our conscious and subconscious behavior."
I'm wondering who would wear such a thing? And why?
I don't even wear a watch, it's no longer a useful thing for me to wear, so I gave up on them about a decade ago.
If I want to know the time, I look at my desk phone (at work) or computer (at work or home) or mobile (when out and about).
Why would I want to wear something that I already have in my pocket? Unless the wearable is smaller, lighter, cheap
AI will not adapt but modify (Score:4, Insightful)
And armed with this information, our interfaces will morph and adapt to our mood as we go about our day. Future interfaces will not be curated, but tailored to fulfill our subconscious needs. Maybe the best way to navigate a digital ecosystem isn't through buttons and sliders. Maybe the solution is something more organic and abstract.
If things get to this point, AI (or machine learning, etc.) will not adapt, but will modify our mood. Look at what social media has already done and continues to do, regarding human behavior.
AI will control human behavior.
There will be no 'adapt to our mood'.
How naive.
Re: (Score:2)
I really think we'll have a direct Brain/Computer Interface long before this becomes reality
Maybe... (Score:2)
But given that UIs have mostly only regressed for more than a decade (mostly for the sake of accommodating poor undersized screens and interfaces), I'm skeptical.
Re: (Score:2)
Re: (Score:2)
From what I read and hear from friends, the classic game interface of joystick is long gone.
I used to play most Valve games with mouse + joystick, and enjoyed them greatly.
Then, iirc, Win7 replaced XP and nope, no longer possible, something was broken somewhere and even though my joystick worked fine in Windows, it failed to work in any Valve game.
So I stopped playing Valve games.
And haven't bought any since.
Interfaces really need to cater for previous generations of users if they want to succeed, I believe
Such projected/imagined AI would make UI obsolete (Score:3)
These are very high-minded and visionary goals for the capabilities of artificial intelligence. Time will tell whether the goals are visionary in the sense of prophetic or visionary in the sense of quixotic.
However, the consequences of an all-knowing AI as described in the article are unlikely. Either the all-knowing AI will be truly omniscient and obviate the need for any UI because AI processing will replace both the UI and the human thought and interaction behind the UI, or the all-knowing AI will never reach such lofty goals of affecting UIs. The middle ground of being all-knowing but not too all-knowing will be tricky to achieve.
Re: (Score:2)
I rather think it will be like Prof. Trelawney of Hogwarts. For most of the time it will attempt to act like it is all seeing and fail miserably. Every now and again, it will go into a trance and predict everything you want to do perfectly. However, it will then come out of a trance and never recall anything it did.
Re: (Score:2)
" AI will be truly omniscient"
No it won't. That is a concept from weak-minded people who think that if THEY don't understand it, it must be god-like.
No different than some 12th century rube thinking you are a god because you went back in time with a cell phone.
Energy requirements alone mean it won't be omniscient, not to get into size and the fact that, by definition, omniscience also means constantly expanding to acquire new data.
"the AI processing will replace both the UI and the human thought"
OK, I don't want to
Is it fairy tale day again? (Score:2)
This is nonsense straight out of a "tech fantasy" story of bad quality. I can't wait for the utterly non-intelligent AI hype to die down.
Re: (Score:2)
Re: (Score:2)
Hmm. Is there an option to filter the complete morons among the "editors"?
Re: (Score:2)
WOO WOOOO! All aboard the fake-ass-AI hype-train! (Score:2)
Know all I really want in any user interface? A gods-be-damned dark theme or at least the ability to make it so. Tired of my eyes feeling like they're going to start bleeding every second I have to use some things because everything has to be BRIGHT BRIGHT BRIGHT!
Can I get a "Hell, yeah!" from y'all on that?
Re: WOO WOOOO! All aboard the fake-ass-AI hype-tra (Score:2)
Re: (Score:2)
Well, you are demonstrably wrong. But keep screaming like an ignorant fool who refuses to believe the earth is a spheroid.
Re: (Score:2)
Absolutely terrible (Score:3)
Let's keep this in the sci-fi novellas, shall we? This idea is terrible for more reasons than I can count (how do you provide support for someone in a system when you don't know the commands and interactions that are custom only to that person?). The author goes from AI to... autodesk generated 3D models? They throw out a couple paragraphs of fantasy and here we are talking about it on Slashdot.
I've posted about this many, many times, so I'm not going into a huge amount of detail, but I will at least summarize. We live in a physical world that our bodies are designed to interact with using all our senses. Thus we interface optimally with objects that exist in our world that provide a gateway into a virtual construct. A computer keyboard is the perfect example. It is real, and I can use my eyes *and* my sense of touch to interact with it. My fingers find the correct keys because my brain is very accustomed to this physical object, and I can do it purely by touch alone. Since it is real, I can interact with it in the ways my body naturally does with anything else around me. Each button does a specific thing, so I can interface with it without having to seek some other feedback (such as visual cues) to know I have selected the right button or pressed it hard enough, etc.
The mouse is the opposite. It provides an interface to a virtual reality construct in the digital world - a pointer. To use this pointer I have to interact with the mouse, coordinate my visual senses with where the virtual object is on the screen, then I can coordinate the two actions (moving the mouse versus how the pointer moves in response) until I have manipulated the virtual pointer in the desired way. So why do we not use the mouse to enter text into a computer? For the same reason that the concepts suggested in this article are stupid.
This guy has watched Johnny Mnemonic a few times too many... https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Plausible (Score:2)
Even if you discount such difficult to measure metrics like pupil dilation and other stuff, this is straightforward.
You can make available to an AI agent a list of all the controls and data types the underlying app has. The agent can design various UIs that are variants on existing "good" UIs the agent has seen. (Generative neural network). The AI can have a predictive model of a typical user and measure the predicted performance of this UI. (How often does a user touch the wrong button? How many click
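A toy version of that loop, where random search stands in for the generative network and a crude Fitts's-law-style cost stands in for the predictive user model (everything here is illustrative, not a real API):

import random

CONTROLS = ["play", "pause", "seek", "volume", "settings"]
USAGE = {"play": 0.5, "pause": 0.3, "seek": 0.1, "volume": 0.07, "settings": 0.03}

def random_layout():
    """Place each control on a 100x100 canvas: (x, y, size)."""
    return {c: (random.uniform(0, 90), random.uniform(0, 90), random.uniform(5, 20))
            for c in CONTROLS}

def predicted_cost(layout):
    """Lower is better: frequent controls should be big and near the corner."""
    cost = 0.0
    for control, freq in USAGE.items():
        x, y, size = layout[control]
        cost += freq * ((x * x + y * y) ** 0.5 + 1) / size
    return cost

candidates = [random_layout() for _ in range(1000)]
print(min(candidates, key=predicted_cost))

Swap the random generator for a generative model trained on "good" UIs, and the cost function for a learned model of mis-taps and task completion times, and you have the pipeline described above.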
Old/new. (Score:2)
You mean, it will look basically the same (keyboard) with different tech behind it?
Re: Old/new. (Score:1)
It can be refreshing to type on an old typewriter. It all goes directly to the paper you put in the typewriter. Nothing is stored away somewhere to be lost.
Re: (Score:2)
and it doesn't spy on me, either.
Re: (Score:2)
That's pretty much what I was going to say. You'll look at the typewriter and say "remember when machines did only what you told them? Those were good times."
Buzzword bingo bullshit article (Score:2)
I don't want AI to modify the interface (Score:2)
I want AI to be the interface. At least some of the time, like when I'm driving and don't want to touch a screen or keyboard. That's when I want Jarvis.
When I'm trying to do something super specific (and likely a one-off) at my desk, I might want AI to help with complicated things but I sure don't want it adjusting the UI on the fly. I rely on muscle memory way too much for that to be anything but annoying. But I recognize I'm also a nerdy software developer who likes remembering which control does what.
Wha
hmm (Score:1)
Oh boy! (Score:2)
When AR hits the market ... (Score:2)
Probably near the end of round 2.
AI generated Christmas e-cards (Score:1)
I have an AI UI (Score:2)
Or ... (Score:2)
Re: (Score:2)
My ex-wife would never be able to make or take a call.
Re: (Score:2)
Gee, I can't imagine why she dumped you.
Re: (Score:2)
Gee, I can't imagine why she dumped you.
She didn't, I dumped her.
I kept the house, my car, and my son, so in the end it all worked out fine.
Might be better....might (Score:2)
Hmmm, I'm skeptical, but who knows, they might be better....might. I'll be interested to see if this technique produces better interfaces. Right now I'd give it a 50/50 chance.
All intelligence is artificial (Score:2)
All intelligence is artificial. Currently it seems to be confined to biological entities, but I don't think that'll be the case forever.
Machine Learning and AI will go hand-in-hand, and my guess is that the results will appear gradually at first but get better pretty quickly. It won't be like you think it will; what initially arises will be embedded as feedback and control services, not some glorified version of Alexa.
After a while it'll hardly matter if it's genuinely "intelligent" or not (whate
How AI Will Eat U! (Score:4, Funny)
Putting a dot under the last letter would get more clicks.
All I want... (Score:2)
...is to be able to give the screen the finger and have the topmost pop up or active tab or window close, that would make me happy.
A movie about this (Score:1)
That's not looking good for psychometric devices like polygraphs and fMRI, which only attempt to answer the question 'Are you afraid (of the truth)?' The obvious problem is that we spend so much time thinking about our physical selves: sex, food, sleep, pain, boredom and frustration; most are difficult to satisfy via on-line purchases.
Our moods are infinitely variable and, worse, sexist. Modern culture encourages women to vary their mood and express it somehow, usually poorly. Men are treated the exa
Just like I'm so proud when using no-touch faucets (Score:2)
A few times a week, I'm standing in front of the sink summoning daemons, but there's no water to be had. But all the soap I want! Or vice versa. If only there were some way to control such things!
Fortunately, we have a fix to that problem - in the future, the automated bathroom door won't even let me in! I'm sure we'll develop a fine social code for reaching out to each other for help. Like maybe you get me access to the restroom, and I'll convince the elevator to take you to the third floor. Barb dow
Dynamics (Score:2)
Erm, not really (Score:2)
Not so sure about that.
Interfaces are already annoying enough when they morph and change all the time, supposedly based on tailoring themselves to your previous usage.
It's probably always going to make sense to limit user choices to things that actually are implemented in the workflow, to make things discoverable in a logical and repeatable way, and to provide a standardized way of making the machine do things instead of relying on it guessing properly while giving you an ever changing interface.
Neural net for a dynamic UI is overkill. (Score:1)
But dynamic UIs that change based on what the user commonly clicks on already exist. They're not extensively used, though, because test-group users get confused when items move around based on usage. Sigh.
Using a neural net to track the user's behavior and guess what the user wants seems like overkill. And it has the same confusion issue.
Although if you apply it to the browser, the neural net (NN) could easily give suggestions on the start page based on surfing habits and the time. For instance if you usually surf porn on a
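A minimal sketch of that usage-ordered behavior (names invented; the stable sort is one small mitigation for the items-moving-around confusion):

from collections import Counter

class AdaptiveMenu:
    """Menu items float toward the top as they accumulate clicks."""
    def __init__(self, items):
        self.items = list(items)
        self.clicks = Counter()

    def click(self, item):
        self.clicks[item] += 1

    def render(self):
        # sorted() is stable: unclicked items keep their familiar order.
        return sorted(self.items, key=lambda i: -self.clicks[i])

menu = AdaptiveMenu(["New", "Open", "Save", "Export", "Print"])
for _ in range(3):
    menu.click("Export")
menu.click("Open")
print(menu.render())  # ['Export', 'Open', 'New', 'Save', 'Print']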
Re: (Score:1)
Btw, a small example is the Windows 7 start menu. It has a small section that is populated based on usage.
Although it is a buggy example; items get stuck.