How AI Will Eat UI (artyomavanesov.com) 110

The inevitable day when machines learn to design our apps. From a report: When AR wearables hit the market, our apps will start tracking both our conscious and subconscious behavior. By measuring our heart rate, respiration, pupil size, and eye movement, our AIs will be able to map our psychology in high resolution. And armed with this information, our interfaces will morph and adapt to our mood as we go about our day. Future interfaces will not be curated, but tailored to fulfill our subconscious needs. Maybe the best way to navigate a digital ecosystem isn't through buttons and sliders. Maybe the solution is something more organic and abstract.

Autodesk is developing a system that uses Generative Design to create 3D models. You enter your requirements, and the system spits out a solution. The method has already produced drones, airplane parts, and hot rods. So it's only a matter of time before we start seeing AI-generated interfaces. This may all sound far out, but the future tends to arrive sooner than we expect. One day, in a brave new world, we will look at contemporary interfaces the same way we look at an old typewriter: gawking at its crudeness and appreciating how far we've come.
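
To make the summary's adaptive-UI idea concrete, here is a minimal sketch in Python, assuming hypothetical biometric inputs and made-up thresholds; none of this corresponds to a real wearable API.

```python
# Hypothetical sketch of the adaptive-UI idea from the summary: biometric
# signals in, interface adjustments out. All names and thresholds are
# illustrative assumptions, not any real wearable API.
from dataclasses import dataclass

@dataclass
class Biometrics:
    heart_rate: float      # beats per minute
    pupil_dilation: float  # 0.0 (constricted) .. 1.0 (fully dilated)
    gaze_dwell_ms: float   # how long the eyes rest on one element

def adapt_ui(signals: Biometrics) -> dict:
    """Return coarse UI adjustments inferred from (assumed) stress markers."""
    stressed = signals.heart_rate > 100 or signals.pupil_dilation > 0.8
    distracted = signals.gaze_dwell_ms < 150
    return {
        "confirmation_dialogs": not stressed,   # fewer interruptions under stress
        "font_scale": 1.25 if distracted else 1.0,
        "layout": "minimal" if stressed else "full",
    }

if __name__ == "__main__":
    print(adapt_ui(Biometrics(heart_rate=112, pupil_dilation=0.85, gaze_dwell_ms=120)))
```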

  • by DarkRookie2 ( 5551422 ) on Tuesday December 17, 2019 @12:32PM (#59529048)
    That they will be better designed than most of the current ones.
    • The problems with most UI designs are the following.
      1. They are not customizable to everyone's workflow, so no one really gets the optimal UI; they just use the same workflow for different jobs.
      2. They are so customizable that they cannot give good defaults.
      3. The workflow usage always changes, but code wants to be static.

      • I really think a lot of the progress in the next decade will be on conversational interactions with devices, rather than better iterations of WIMP.
        • conversational

          Needs to go a step beyond that.

          The US needs to infer anger and frustration on the part of the user.

          If I keep giving my phone a dirty look every time it gives me a fourth and fifth equivalent of an "are you sure" modal dialog, it should stop doing so. On the other hand, if my mom expresses relief that she's saved by those confirmation requests it should continue to give them to her.

          • I'm pretty sure that the US already infers anger and frustration on the part of the user (i.e., US citizens)

        • by cayenne8 ( 626475 ) on Tuesday December 17, 2019 @01:18PM (#59529218) Homepage Journal

          I really think a lot of the progress in the next decade will be on conversational interactions with devices, rather than better iterations of WIMP.

          I dunno.

          Almost nothing infuriates me more than calling somewhere commercial for support and having that stupid PHONE robot that insists you talk to it, rather than just letting you push number buttons.

          I especially hate that in an office setting, or out and about around other folks....

          And they never fucking get it right when talking, it seems....

          I don't want to talk to my devices....I want a button or prompt to push/touch....or even a gesture maybe, I hate trying to talk to a machine.

          Hell, even at home, I get to the point of just repeating over and over "get me a fucking operator"...and it finally gets me to a real person.

          • Hence the need for all the progress : )

            I might be wrong about this, but I still imagine really powerful people at the top of the pyramid barking out orders more than typing and pointing and clicking. And so I gather that's what people really want, if technology can give it to them.

            Also it's easy to imagine a new interface directly replacing an old one which is never quite right. You're never going to say to your car, "turn on the right blinker" instead of pulling the turn signal lever. But you probab

          • Hear, hear.
            Same here, I never got accustomed to talking to a device. Maybe if it evolves to the point where I could talk to it like I talk to a human being, but until then any attempt falls straight to the bottom of the uncanny valley for me.

          • I really think a lot of the progress in the next decade will be on conversational interactions with devices, rather than better iterations of WIMP.

            I dunno.

            Almost nothing infuriates me more than calling somewhere commercial for support and having that stupid PHONE robot that insists you talk to it, rather than just letting you push number buttons.

            I especially hate that in an office setting, or out and about around other folks....

            And they never fucking get it right when talking, it seems....

            I don't want to talk to my devices....I want a button or prompt to push/touch....or even a gesture maybe, I hate trying to talk to a machine.

            Hell, even at home, I get to the point of just repeating over and over "get me a fucking operator"...and it finally gets me to a real person.

            Yeah, exactly. Sometimes other people are asleep in your home, so you want to do things quietly? Or sometimes other people are having conversations, or watching TV, or doing homework, and would just as soon not have to listen to you loudly repeating when you want to pick up your prescription or whatever, hoping that the bot will understand?

            It's especially infuriating when the business previously had a perfectly good push button menu.

        • by BranMan ( 29917 )

          God I hope not. I read an article on comparative language data rates - how many bits per second of information can you actually convey with each different language - German, English, Spanish, Italian, Chinese, etc.

          They found they all ended up at a rate of 39 bits per second. Conversational interfaces are incredibly slow - and that's person-to-person. Person-to-machine will be less, always.

          While it has flexibility going for it, speech is just too dang slow for any useful work.
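
Purely as a back-of-envelope illustration of the point above, here is a rough calculation comparing the information rate of selecting a command by hotkey versus speaking it; the timings below are assumptions for illustration, not measurements.

```python
# Rough back-of-envelope: information rate of selecting one of N commands
# via hotkey vs. speaking the command aloud. All timings are assumptions;
# the 39 bits/s figure is the cross-language speech rate mentioned above.
import math

N_COMMANDS = 64                      # choosing 1 of 64 commands ~ 6 bits
bits_per_choice = math.log2(N_COMMANDS)

hotkey_seconds = 0.3                 # assumed time for a practiced keystroke
spoken_seconds = 1.5                 # assumed time to say "open the file menu"

print(f"hotkey : {bits_per_choice / hotkey_seconds:.1f} bits/s")
print(f"speech : {bits_per_choice / spoken_seconds:.1f} bits/s")
print("conversational speech overall: ~39 bits/s (person to person)")
```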

      • The problem with most UI designs is that they were made with one of three motivations: an art major completely disregarding actual usability in favor of their "artistic vision", an art major deliberately making something hard and painful to use as a "statement", or a company trying to cripple the user as much as possible.

        • I made a web app with a UI that mimicked a command line program: ask one question, do some work, ask another. All the IT people hated it; heck, I didn't like it much either, but that method seemed to help solve what the users were asking for. The users loved it because it was easier for them to use.
          Much of the problem with UI is that we are making something to do something new, but relying on old interface methods to do it.
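
For what it's worth, a minimal sketch of that one-question-at-a-time style, assuming a plain console program with made-up questions and a placeholder for the real work:

```python
# Minimal sketch of a "command line"-style wizard: ask one question, act on
# the answer, then ask the next. Questions and handlers are hypothetical.
def wizard():
    name = input("What should we call this report? ")
    fmt = input("Output format (csv/pdf)? ").strip().lower()
    while fmt not in ("csv", "pdf"):
        fmt = input("Please answer csv or pdf: ").strip().lower()
    print(f"Generating {name}.{fmt} ...")
    # ... do the actual work here ...

if __name__ == "__main__":
    wizard()
```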

        • To be fair, it is really hard to define good UI. I loathed doing website design because there is no theory. Of course it should look good on different screens, but ultimately usage should define the position of everything, to minimize time spent and maximize the probability of the user getting what they want by poking at obvious things. But the vast majority of things CSS can do should not be done.

      • +1 This is exactly what I thought
  • by Quakeulf ( 2650167 ) on Tuesday December 17, 2019 @12:34PM (#59529056)
    Called parametric design [wikipedia.org].
  • by doom ( 14564 ) <doom@kzsu.stanford.edu> on Tuesday December 17, 2019 @12:35PM (#59529062) Homepage Journal
    The AIs generate a UI that looks exactly like Wordstar.
    • ...or EMACS.

      (...what?)

      • by Shotgun ( 30919 )

        If there is any 'I' in the AI at all, it will look like vi.

      • by doom ( 14564 )

        Seriously, much as I like emacs, its default interface is clearly "evolved" rather than designed-- it's a mess that people like myself like because we've gotten used to it-- and because we like hacking on it to tweak the details.

        Wordstar actually did an excellent job of combining the power of a keyboard interface with the discoverability of menu-driven designs-- therefore, it was forgotten completely as everyone ran off chasing the new bright and shinies.

  • There is no AI (Score:4, Insightful)

    by geek ( 5680 ) on Tuesday December 17, 2019 @12:35PM (#59529064)

    It doesn't exist. Now stop writing dumb ass articles about it. Thanks

    • Re: (Score:2, Insightful)

      by Waffle Iron ( 339739 )

      It doesn't exist. Now stop writing dumb ass articles about it. Thanks

      There's no need to get hung up on semantics.

      The article is talking about *something*, which certainly does exist. It's a way to direct computers that's different than the traditional approach of humans writing individual instructions into text files. The industry-standard term for this thing is "AI", although it certainly isn't any kind of human or animal "intelligence", but neither of those things really matters.

      What does matter is that it exists, and that it's already successfully employed in widespread use; maybe most notably in allowing corporations and governments to do Orwellian surveillance of the public on a mass scale. It already has a major impact on your life.

      • You don't need any sort of computer ML/AI/whatever pseudo intelligence to have a police state.
      • Re:There is no AI (Score:4, Insightful)

        by UnknownSoldier ( 67820 ) on Tuesday December 17, 2019 @01:37PM (#59529296)

        Glorified table lookup is NOT A.I.

        Stop buying into bullshit definitions (and articles.)

        These articles are about "a.i." -- artificial ignorance -- where they appear to have pseudo-intelligence but have no ability to reason and are basically fucking stupid about everything except one tiny little sliver of a topic where they have crunched numbers to make it appear they know what they are doing.

        Until there is an _actual_ test for consciousness, the term A.I. is a bullshit term; this is why I prefer the accurate acronym: a.i.

        The term ML, Machine Learning, is a little more honest, but hijacking an existing term, Machine Language, is myopic and creates confusion.

        If you can't even be honest and precise with definitions then there is little point in discussing anything afterwards since obviously you don't value honesty, clarity, or precision.

        • Not only are you hung up on semantics, you've got your panties in a wad about it.

          Maybe it would be better for you to direct your anger at the manufacturers of Grape Nuts. At least you'd only have to persuade one party to change their inaccurate terminology instead of an entire industry.

          • Re: (Score:1, Flamebait)

            by geek ( 5680 )

            They aren't semantics. They are words and they have meaning, you insufferable fucking cunt

            • Let's try to keep the discussion civil please.

              Ad hominems don't solve anything.

              Parent is probably not even a programmer which is why they are ignorant about how "AI" even works and keeps whining about semantics.

              • Um, I was programming back in the 80s when I briefly studied in the field of AI during the original "boom". (That was when some academic eggheads thought that since Lisp macros allow you to write self-referential code, they had "Intelligence"!). I quickly concluded it was all BS because: a) Lisp is simply another computer language, and b) you can't pipe human brain levels of pattern processing through a single 16-bit accumulator.

                However, available since CPU power, storage and memory have each increased by 6

                • by geekoid ( 135745 )

                  1) The language doesn't matter.
                  2. Human brain level processing isn't needed for artificial intelligence. In fact, with one exception, there is no reason to think it would be.
                  The one exception being if we create something by backward engineering the human brain, literally. In that case the end result would very likely look like 'brain level processing'

                  Yes, we have systems that are intelligent.

                  intelligence (noun): the ability to acquire and apply knowledge and skills.

                  We have sytem th

              • by rtb61 ( 674572 )

                They are a relational table lookup: the lookup is defined by the relational nature of the query to a series of database tables, linked together by the query, with the query itself altered by its progress through those various tables, taking into account past queries and the success or failure the results produced when applied, and this is used to alter the nature of future similar queries. The relational part and the query alteration are important, and how well that happens, how well the 'machine' reasons, will define how

            • That's amazing. (Score:5, Insightful)

              by Brain-Fu ( 1274756 ) on Tuesday December 17, 2019 @02:32PM (#59529484) Homepage Journal

              You just said "they aren't semantics, they are words and they have meaning."

              "Semantics" is also a word, with a meaning. Specifically: "the branch of linguistics and logic concerned with meaning."

              So, your statement is directly self-contradictory.

              And, since you agree that words have meanings, here is the actual meaning of "Artificial Intelligence" according to the dictionary: [merriam-webster.com]

              the capability of a machine to imitate intelligent human behavior

              I emphasized the word "imitate" because that is very relevant to this discussion. Machines "imitate" intelligent behavior, because they aren't actually intelligent. That is what "imitate" means, you see. Something is being faked. In this case, intelligence.

              People keep insisting "it is not AI because it is not actually intelligent!" But that's the point. It isn't supposed to be actually intelligent. Not, at least, according to the actual definitions of the words being used here.

              Perhaps you are thinking of something like "machine intelligence" or "synthetic intelligence," neither of which exist today nor are we anywhere close. But if we are talking about "artificial" intelligence, then we are talking about unintelligent machines doing specific things that usually require intelligence to do. That is to say, simple algorithms are used to generate the illusion of intelligence (which isn't actually there). Or, to state it in one word, "imitation."

              • That's a bad definition. We have a name for that, and it is "simulated intelligence", or SI. Artificial intelligence is what actual intelligence assembled by humans is meant to be called. In itself, machine learning is just a strategy for one or both of these things, and not directly comparable to either.

                • by geekoid ( 135745 )

                  Look up the word intelligence. Then look at many modern computer systems. They do the same thing.

                  Calling it machine learning is no different than calling the brain 'cell learning'
                  I can not stress this enough. It's artificial Intelligence, not Artificial Emotions.*
                  Remove all the emotional components from your meaning of intelligence.

                  *Which is good, because we don't want the system calculating the debt to have a bad hair day!

                  • "the ability to acquire and apply knowledge and skills."

                    Computers are good at acquiring knowledge. But they're not good at making sense of it. Neither are we at first. We have to learn how to learn, and then we have to learn how to be sensible. Many humans never become very intelligent, either. But computers don't learn how to learn new things. They are still limited to learning what they've been taught to learn. Most of the things we call AI aren't able to acquire new skills at all, they have to be specifi

            • by geekoid ( 135745 )

              And you are using them wrong. Maybe look them the fuck up, you poor excuse for a limp-wristed cum stain.

              See, I can call people names too!

              Anyway, the only reason you are angry is because you are wrong, have no argument, and so use name calling.

        • by geekoid ( 135745 )

          Glorified table lookup is the human brain.
          A NEST is conscious, maybe you don't know the definition?

          consciousness (noun): the state of being awake and aware of one's surroundings.

          Do you mean sentience?

          And if so, why do you believe sentience is necessary for intelligence?

          Sentience, being a form of emotion, is in no way needed for intelligence.

          "If you can't even be honest and precise with definitions"
          We can, and that's how we know you are wrong.

        • Glorified table lookup is NOT A.I.

          It's called weak AI (or narrow AI). Check it out [wikipedia.org]. Weak AI basically is a term developed to mean, "Anything useful we discovered while looking for general AI, but is not general AI."

      • A. Would you have accepted it just as easily if they called it "magic"?
        2. It does not have a major impact on my life. Nobody from the government is bugging me, and what I've been buying and how much I pay is affected by many variables which existed long before the current incarnations of "AI" and "big data".

      • The article is talking about *something*, which certainly does exist. It's a way to direct computers that's different than the traditional approach of humans writing individual instructions into text files.

        This is way too nebulous. Completely meaningless in fact. People have been "directing" computers and technology more generally at much higher levels than "writing individual instructions into text files" for countless decades.

        The industry-standard term for this thing is "AI", although it certainly isn't any kind of human or animal "intelligence", but neither of those things really matters.

        What matters is that "AI" is by itself a completely meaningless term because it conveys no useful information. Simply way too broad to be useful.

        What does matter is that it exists, and that it's already successfully employed in widespread use; maybe most notably in allowing corporations and governments to do Orwellian surveillance of the public on a mass scale. It already has a major impact on your life.

        The fact there is an industry that exists to conduct mass stalking and profit from "AI" conveys nothing when everything is labeled "AI".

    • Yes there is. (Score:3, Insightful)

      by Brain-Fu ( 1274756 )

      AI exists, and has for a long time. It just doesn't mean what you think it means.

      And you lost this battle long before it began. The rest of the world doesn't care about your so-high-it's-useless bar. And articles about AI will continue to be written, and discussions about AI will continue to be had, despite your requests that they stop.

      You are just spitting into the wind, at this point.

    • by geekoid ( 135745 )

      Yes, it does, dumb dumb.
      The bar keeps moving. I have a device in my pocket that, when described to any AI researcher in 1970, they would say it's AI.
      But as we create algorithms that can do things previously thought of as 'only humans can do', the AI bar gets moved because people are like: "That doesn't count, it's just math"

      We have machines that can look at data and create a new algorithm to predict future results. In some cases we don't understand how it works, but the predictions are spot on.

      We have application wo

  • by Layzej ( 1976930 ) on Tuesday December 17, 2019 @12:41PM (#59529084)

    This may all sound far out, but the future tends to arrive sooner than we expect.

    That doesn't sound right. Especially when it comes to AI.

    • This may all sound far out, but the future tends to arrive sooner than we expect.

      That doesn't sound right. Especially when it comes to AI.

      It'll be like practical fusion. As each decade passes, we'll be just 40 years away.

    • If the future arrived sooner than I expected, then I would already be navigating code in 3-D, literally walking through it, touching parts of it, and following where the paths lead. We were supposed to have escaped the 2-D limitation, oh, about a decade ago.

      • If the future arrived sooner than I expected, then I would already be navigating code in 3-D, literally walking through it, touching parts of it, and following where the paths lead. We were supposed to have escaped the 2-D limitation, oh, about a decade ago.

        3-D? I thought we were still busy making everything flat?

      • by geekoid ( 135745 )

        LOL. Your version of the future isn't the future. I think no one having predicted cell phones underscores that -- or those who did predict computers, but how limited they were in their thinking.

        "then I would already be navigating code in 3-D"
        Stupidest fucking way to go through code. This is like in the 90s when everyone thought a VR office meant pretending to go through desk drawers and moving as if you were in an office. How wasteful is that?

        That said, you can model code in 3d. The only thing left to do is import it into a VR s

        • This wasn't necessarily for navigating individual calls; that's more quickly done by choosing your IDE's "goto definition" or "goto reference" function. This was for the visually-oriented (whatever fraction of us that is), and for getting an overview of a system, i.e., answering higher-level questions about the structure of code. It wouldn't fill a need for everyone, and that's fine. But it was an idea being bandied about 25 or so years ago.

  • by roc97007 ( 608802 ) on Tuesday December 17, 2019 @12:46PM (#59529102) Journal

    ...how much all that biometric information will be worth!

    • ...how much all that biometric information will be worth!

      Yeah, that part got me....

      I really would like some practical AR, but I do NOT want it reading my biometrics, emotions, etc....I don't want it watching and learning ME and doing God knows what with that data.

      I'll not use it if that is what is required.

      • by geekoid ( 135745 )

        It will. No stopping it. It might just be a little data, from many different sources, but they will get it all.
        I think we need to spend less energy fighting the inevitable*, and more energy fighting for Constitutional rights protecting our data and personal information.
        The fact that someone may never join Facebook, but Facebook will still have a pretty complete data set about that person, is criminal.

        *It's inevitable because we are doing it to ourselves; there are very useful reasons for it.

  • Computers as Theatre.

    https://books.google.com/books... [google.com]

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    It's an analysis of computer-human interaction as seen through the lens of Aristotelian poetics. Very good stuff, don't take Microsoft's bastardization of this work (Clippy) as evidence of the failure of the theory.

  • ...about the user experience with another user, since their experiences are different. I already struggle talking to my coworkers about certain IDE features, because of the highly personalized keyboard shortcuts for each feature.
    • "When AR wearables hit the market, our apps will start tracking both our conscious and subconscious behavior."

      I'm wondering who would wear such a thing? And why?

      I don't even wear a watch, it's no longer a useful thing for me to wear, so I gave up on them about a decade ago.

      If I want to know the time, I look at my desk phone (at work) or computer (at work or home) or mobile (when out and about).

      Why would I want to wear something that I already have in my pocket? Unless the wearable is smaller, lighter, cheap

  • by Sir_Eptishous ( 873977 ) on Tuesday December 17, 2019 @12:57PM (#59529136)

    And armed with this information, our interfaces will morph and adapt to our mood as we go about our day. Future interfaces will not be curated, but tailored to fulfill our subconscious needs. Maybe the best way to navigate a digital ecosystem isn't through buttons and sliders. Maybe the solution is something more organic and abstract.

    If things get to this point, AI (or machine learning, etc.) will not adapt, but will modify our mood. Look at what social media has already done and continues to do regarding human behavior.

    AI will control human behavior.
    There will be no 'adapt to our mood'.
    How naive.

    • by cusco ( 717999 )

      I really think we'll have a direct Brain/Computer Interface long before this becomes reality

  • But given that UIs have mostly only regressed for more than a decade (mostly for the sake of accommodating poor undersized screens and interfaces), I'm skeptical.

    • If the AI is actually intelligent it will generate "classic" interfaces for mouse and keyboard usage and not the "mobile-first" atrocities we're forced to use nowadays
      • From what I read and hear from friends, the classic game interface of joystick is long gone.

        I used to play most Valve games with mouse + joystick, and enjoyed them greatly.

        Then, iirc, Win7 replaced XP and nope, no longer possible, something was broken somewhere and even though my joystick worked fine in Windows, it failed to work in any Valve game.

        So I stopped playing Valve games.

        And haven't bought any since.

        Interfaces really need to cater for previous generations of users if they want to succeed, I believe

  • by larryjoe ( 135075 ) on Tuesday December 17, 2019 @01:04PM (#59529164)

    These are very high-minded and visionary goals for the capabilities of artificial intelligence. Time will tell whether the goals are visionary in the sense of prophetic or visionary in the sense of quixotic.

    However, the consequences of an all-knowing AI as described in the article are unlikely. Either the all-knowing AI will be truly omniscient and obviate the need for any UI because AI processing will replace both the UI and the human thought and interaction behind the UI, or the all-knowing AI will never reach such lofty goals of affecting UIs. The middle ground of being all-knowing but not too all-knowing will be tricky to achieve.

    • by gtall ( 79522 )

      I rather think it will be like Prof. Trelawney of Hogwarts. For most of the time it will attempt to act like it is all seeing and fail miserably. Every now and again, it will go into a trance and predict everything you want to do perfectly. However, it will then come out of a trance and never recall anything it did.

    • by geekoid ( 135745 )

      " AI will be truly omniscient"
      No it won't. That is a concept by weak-minded people who think if THEY don't understand it, it must be godlike.
      No different than some 12th-century rube thinking you are a god because you went back in time with a cell phone.

      Energy requirements alone mean it won't be omniscient, not to get into size and the fact that, by definition, omniscient also means constantly expanding to acquire more new data.

      "e AI processing will replace both the UI and the human thought "
      OK, I don't want to

  • This is nonsense straight out of a "tech fantasy" story of bad quality. I can't wait for the utterly non-intelligent AI hype to die down.

  • Fuck so-called fake-ass inaccurately-named 'AI' shit. I'm starting to get ice-cream headaches from it every time it gets mentioned.
    Know all I really want in any user interface? A gods-be-damned dark theme or at least the ability to make it so. Tired of my eyes feeling like they're going to start bleeding every second I have to use some things because everything has to be BRIGHT BRIGHT BRIGHT!
    Can I get a "Hell, yeah!" from y'all on that?
  • by Dan East ( 318230 ) on Tuesday December 17, 2019 @01:22PM (#59529232) Journal

    Let's keep this in the sci-fi novellas, shall we? This idea is terrible for more reasons than I can count (how do you provide support for someone in a system when you don't know the commands and interactions that are custom only to that person?). The author goes from AI to... Autodesk-generated 3D models? They throw out a couple paragraphs of fantasy and here we are talking about it on Slashdot.

    I've posted about this many, many times, so I'm not going into a huge amount of detail, but I will at least summarize. We live in a physical world that our bodies are designed to interact with using all our senses. Thus we interface optimally with objects that exist in our world that provide a gateway into a virtual construct. A computer keyboard is the perfect example. It is real, and I can use my eyes *and* my sense of touch to interact with it. My fingers find the correct keys because my brain is very accustomed to this physical object, and I can do it purely by touch alone. Since it is real, I can interact with it in the ways my body naturally does with anything else around me. Each button does a specific thing, so I can interface with it without having to seek some other feedback (such as visual cues) to know I have selected the right button or pressed it hard enough, etc.

    The mouse is the opposite. It provides an interface to a virtual reality construct in the digital world - a pointer. To use this pointer I have to interact with the mouse, coordinate my visual senses with where the virtual object is on the screen, then I can coordinate the two actions (moving the mouse versus how the pointer moves in response) until I have manipulated the virtual pointer in the desired way. So why do we not use the mouse to enter text into a computer? For the same reason that the concepts suggested in this article are stupid.

    This guy has watched Johnny Mnemonic a few times too many... https://www.youtube.com/watch?... [youtube.com]

  • Even if you discount difficult-to-measure metrics like pupil dilation and other stuff, this is straightforward.

    You can make available to an AI agent a list of all the controls and data types the underlying app has. The agent can design various UIs that are variants on existing "good" UIs the agent has seen. (Generative neural network). The AI can have a predictive model of a typical user and measure the predicted performance of this UI. (How often does a user touch the wrong button? How many click
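
A toy sketch of the loop described in the comment above: generate candidate layouts, score each against a stand-in model of predicted user performance, and keep the best. Both the generator and the scoring function here are placeholder assumptions, not a real generative network or learned user model.

```python
# Toy version of the loop above: propose candidate UI layouts, score each
# with a stand-in "predicted user" model, keep the best. Real systems would
# use a generative model and a learned performance predictor; here both are
# simple placeholders.
import random

CONTROLS = ["save", "open", "export", "settings", "help", "undo"]

def propose_layout(rng: random.Random) -> list[str]:
    """A candidate layout is just an ordering of the available controls."""
    layout = CONTROLS[:]
    rng.shuffle(layout)
    return layout

def predicted_cost(layout: list[str], click_freq: dict[str, float]) -> float:
    """Assumed user model: frequently used controls should come first."""
    return sum(click_freq[c] * position for position, c in enumerate(layout))

if __name__ == "__main__":
    rng = random.Random(0)
    click_freq = {"save": 0.4, "open": 0.25, "undo": 0.2,
                  "export": 0.08, "settings": 0.05, "help": 0.02}
    candidates = [propose_layout(rng) for _ in range(200)]
    best = min(candidates, key=lambda l: predicted_cost(l, click_freq))
    print("best layout:", best)
```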

  • we will look at contemporary interfaces the same way we look at an old typewriter

    You mean, it will look basically the same (keyboard) with different tech behind it?

    • It can be refreshing to type on an old typewriter. It all goes directly to the paper you put in the typewriter. Nothing is stored away somewhere to be lost.

      • by timelorde ( 7880 )

        and it doesn't spy on me, either.

      • That's pretty much what I was going to say. You'll look at the typewriter and say "remember when machines did only what you told them? Those were good times."

  • If I had a drink for every egregious use of a stupid buzzword in the summary I'd be in the ER getting my stomach pumped. There were words. There was syntax. There were no semantics.
  • I want AI to be the interface. At least some of the time, like when I'm driving and don't want to touch a screen or keyboard. That's when I want Jarvis.

    When I'm trying to do something super specific (and likely a one-off) at my desk, I might want AI to help with complicated things but I sure don't want it adjusting the UI on the fly. I rely on muscle memory way too much for that to be anything but annoying. But I recognize I'm also a nerdy software developer who likes remembering which control does what.

    Wha

  • This makes about as much sense as a gingerbread man and cookie monster performing together, with the gingerbread man getting eaten at the end of the show. Seems I've seen stuff like this promised back in the old days of computing, when there was software that could code in BASIC.
  • Thinking about YouTube and its annoying commercial intrusions, I can't wait for AI advertising! I'll be driving down the road when my sunglasses will detect that I have been licking my lips and decide that I'm thirsty. Suddenly my view of the road is blocked by my glasses showing me a commercial for cold beverages available at the next truck stop. Or I'm on the couch next to my girlfriend, she kisses me, and the fitbutt on my wrist detects my elevated pulse. Suddenly it blares out an ad for "RAMIT" brand co
  • I predict the market will win by a knockout.

    Probably near the end of round 2.

  • It claims to 'evolve' designs. I still had to do quite a bit of clicking to make a card https://www.genolve.com/design... [genolve.com]
  • I have an AI UI on my phone, and it's constantly changing the icons on my phone to those I don't want to use at the time. Yes, it occasionally gets it right, but the inability to have a consistent UI prevents me not just from using it, but from learning it as well... And the worst is that when I minimize a playing youtube video and switch to another app, sometimes it will keep playing, sometimes it won't...
  • by PPH ( 736903 )

    ... your phone will monitor your mood and refuse to work unless you get over that pissy attitude.

  • Hmmm, I'm skeptical, but who knows, they might be better....might. I'll be interested to see if this technique produces better interfaces. Right now I'd give it a 50/50 chance.

  • All intelligence is artificial. Currently it seems to be confined to biological entities, but I don't think that'll be the case forever.

    Machine Learning and AI will go hand-in-hand, and my guess is that the results will appear gradually at first but get better pretty quickly. It won't be like you think it will; what initially arises will be embedded as feedback and control services, not some glorified version of Alexa.

    After a while it'll hardly matter if it's genuinely "intelligent" or not (whate

  • by nospam007 ( 722110 ) * on Tuesday December 17, 2019 @05:36PM (#59530074)

    Putting a dot under the last letter would get more clicks.

  • ...is to be able to give the screen the finger and have the topmost pop up or active tab or window close, that would make me happy.

  • ... will be able to map our psychology in high resolution ...

    That's not looking good for psychometric devices like polygraphs and fMRI, which only attempt to answer the question 'Are you afraid (of the truth)?' The obvious problem is that we spend so much time thinking about our physical selves: sex, food, sleep, pain, boredom and frustration; most are difficult to satisfy via on-line purchases.

    ... morph and adapt to our mood ...

    Our moods are infinitely variable and, worse, sexist. Modern culture encourages women to vary their mood and express it somehow, usually, poorly. Men are treated the exa

  • A few times a week, I'm standing in front of the sink summoning daemons, but there's no water to be had. But all the soap I want! Or vice versa. If only there were some way to control such things!

    Fortunately, we have a fix to that problem - in the future, the automated bathroom door won't even let me in! I'm sure we'll develop a fine social code for reaching out to each other for help. Like maybe you get me access to the restroom, and I'll convince the elevator to take you to the third floor. Barb dow

  • Just picking MS as an example - they have produced a boring AND slow UI in the form of Dynamics. It very much reminds me of when I was forced to use MS FrontPage (or indeed SharePoint). The frustrating thing is that MS are making billions from this rubbish.
  • Not so sure about that.

    Interfaces are already annoying enough when they morph and change all the time, supposedly based on tailoring themselves to your previous usage.

    It's probably always going to make sense to limit user choices to things that actually are implemented in the workflow, to make things discoverable in a logical and repeatable way, and to provide a standardized way of making the machine do things instead of relying on it guessing properly while giving you an ever changing interface.

  • But dynamic UIs already exist that change based on what the user commonly clicks on. Not extensively used though, because test-group users get confused when items move around based on usage. Sigh.
    Using a neural net to track the user's behavior and guess what the user wants seems like overkill, and it has the same confusion issue (see the sketch at the end of this thread).

    Although if you apply it to the browser the neural net (NN) could easily give suggestions on the start page based on surfing habit and the time. For instance if you usually surf porn on a

    • Btw, a small example is the Windows 7 start menu. It has a small section that is populated based on usage.
      Although it is a buggy example; items get stuck.
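
A rough sketch of the usage-ranked menu idea discussed in this thread, with made-up item names and click counts; pinning the first couple of slots is one (assumed) way to soften the confusion problem mentioned above.

```python
# Rough sketch of a usage-ranked menu like the one described above: track
# click counts and surface the most-used items first. Item names are made up.
from collections import Counter

class UsageMenu:
    def __init__(self, items):
        self.items = list(items)
        self.clicks = Counter()

    def click(self, item: str) -> None:
        self.clicks[item] += 1

    def ordered(self, pinned: int = 2) -> list[str]:
        """Keep the first `pinned` slots stable (to reduce the confusion the
        parent mentions); rank the rest by how often they were clicked."""
        stable, rest = self.items[:pinned], self.items[pinned:]
        return stable + sorted(rest, key=lambda i: -self.clicks[i])

if __name__ == "__main__":
    menu = UsageMenu(["New", "Open", "Export", "Print", "Settings"])
    for _ in range(5):
        menu.click("Export")
    menu.click("Print")
    print(menu.ordered())   # ['New', 'Open', 'Export', 'Print', 'Settings']
```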
