The World's First Graphical AI Interface (fastcodesign.com) 54

FastCompany reports: Machine learning and artificial intelligence are so difficult to understand, only a few very smart computer scientists know how to build them. But the designers of a new tool have a big ambition: to create the JavaScript for AI. The tool, called Cortex, uses a graphical user interface to make it so that building an AI model doesn't require a PhD. The honeycomb-like interface, designed by Mark Rolston of Argodesign, enables developers -- and even designers -- to use premade AI "skills," as Rolston describes them, that can do things like sentiment analysis or natural language processing. They can then drag and drop these skills into an interface that shows the progression of the model. The key? Using a visual layout to organize the system makes it more accessible to non-scientists.
This discussion has been archived. No new comments can be posted.

  • Windows Me (Score:5, Funny)

    by PopeRatzo ( 965947 ) on Thursday January 25, 2018 @04:05PM (#56002013) Journal

    Visual Basic for AI

    • by murdocj ( 543661 )

      VB was actually a decent language. Javascript, OTOH...

      • ...has lambdas and highly late-bound, which is much nicer. It has absolutely shitty scoping, though, which kind of leaves a nasty aftertaste. I still fail to understand how someone who wanted to bring Scheme into browsers could have botched scoping so badly.
        • "highly late-bound objects", sorry.
    • "Me"? You missed the opportunity to make a "Windows NN" joke...
  • by Anonymous Coward

    Give it a graphical interface and hand it over to a bunch of "UX engineers".

    Pretty soon it'll have a pastel Fisher-Price interface copied from Chrome.

  • by Anonymous Coward

    Come on, world's first? There are plenty of GUIs for this kind of thing already.

  • If it doesn't have all the possible skills, would it still be intelligent?

    Oh, I see, this is just for the case where you want to build a PI (President 'Intelligence'), where you'd only need a small subset of skills.

  • by rogoshen1 ( 2922505 ) on Thursday January 25, 2018 @04:27PM (#56002273)

    That sounds really great, but can it communicate with Serenity and let them know their show was murdered?

  • by 93 Escort Wagon ( 326346 ) on Thursday January 25, 2018 @04:33PM (#56002347)

    I know this!

  • Data Wrangling (Score:5, Interesting)

    by bunyip ( 17018 ) on Thursday January 25, 2018 @04:55PM (#56002567)

    My experience is that getting the data ready for the "math" part is about 80% of the work, and you have to know something about the models to know how to prep the data. For example, a recent text classification project I did with Python and Keras needed only about 10 lines of code to define, train, and test the neural network (roughly sketched at the end of this comment) - but a whole lot more code to extract the data I needed and then beat it into shape for the modeling step.

    That said, I'm quite happy with "Python for AI", as it's quick and simple to do things. Please don't make it suck like Javascript :-)
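
    To make the "10 lines" claim concrete, here's roughly what the model-definition part looks like -- a minimal sketch with hypothetical layer sizes and random stand-in data, not the actual project:

      # Minimal Keras text-classification sketch (hypothetical shapes and labels).
      # In practice, producing x_train / y_train -- cleaning, tokenizing,
      # vectorizing, splitting -- is where most of the code and effort goes.
      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      num_words, num_classes = 10000, 4   # assumed vocabulary size and label count
      x_train = np.random.randint(0, 2, size=(1000, num_words)).astype("float32")  # stand-in bag-of-words vectors
      y_train = keras.utils.to_categorical(np.random.randint(0, num_classes, 1000), num_classes)

      model = keras.Sequential([
          layers.Input(shape=(num_words,)),
          layers.Dense(64, activation="relu"),
          layers.Dropout(0.5),
          layers.Dense(num_classes, activation="softmax"),
      ])
      model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
      model.fit(x_train, y_train, epochs=2, batch_size=32, validation_split=0.1)
      print(model.evaluate(x_train, y_train, verbose=0))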

    • by jma05 ( 897351 )

      While all that is true, I would welcome a lot more advancement in visualization tools for deep learning.

      The current tools are inadequate to easily understand why the network is doing poorly, when it is.

      We need way more than the current TensorBoard. If anyone knows of any practical visualization tools (not academic demos) that they use for their work, please share.
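
      For reference, the "current TensorBoard" baseline I mean is just the stock Keras TensorBoard callback -- something like this (the model, data, and log directory are hypothetical placeholders):

        # Log training to TensorBoard with the stock Keras callback, then
        # inspect curves and weight histograms with: tensorboard --logdir logs/
        import numpy as np
        from tensorflow import keras

        model = keras.Sequential([
            keras.layers.Input(shape=(20,)),
            keras.layers.Dense(8, activation="relu"),
            keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

        x = np.random.rand(256, 20).astype("float32")
        y = np.random.randint(0, 2, size=(256, 1)).astype("float32")

        tb = keras.callbacks.TensorBoard(log_dir="logs/", histogram_freq=1)
        model.fit(x, y, epochs=3, validation_split=0.2, callbacks=[tb])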

    • So not really different from using linear algebra to do stuff, right? Calling LAPACK is easy; filling out the matrix properly to express the problem is the fun part.
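
      As a toy illustration (a hypothetical 1-D Poisson system, nothing from the article): the LAPACK call is one line, and everything else is expressing the problem as a matrix.

        # Toy example (hypothetical): solve a 1-D Poisson problem -u'' = f with
        # u(0) = u(1) = 0. Assembling the matrix is most of the code; the LAPACK
        # call (scipy.linalg.solve wraps LAPACK's *gesv) is a single line.
        import numpy as np
        from scipy.linalg import solve

        n = 100
        h = 1.0 / (n + 1)

        # The "fun part": encode the problem as a tridiagonal system.
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 2.0 / h**2
            if i > 0:
                A[i, i - 1] = -1.0 / h**2
            if i < n - 1:
                A[i, i + 1] = -1.0 / h**2

        x = np.linspace(h, 1.0 - h, n)
        b = np.sin(np.pi * x)        # right-hand side f(x)

        # The "easy part": hand the assembled system to LAPACK.
        u = solve(A, b)
        print(u[:5])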
  • If it was really Artificial Intelligence you could just say "computer, do so and so" and it would figure out for itself how to do it.
    • To be quite honest that's how I feel about it. What we currently have isn't 'general AI', but this 'slightly smarter than just a plain old computer program' half-assed AI. When we actually understand what it is in our own biological brains that produces the phenomena of true sentience, self-awareness, and consciousness, then we'll be able to design hardware and software that can likewise produce that. Until then we're stuck with these weak 'approximations' that just mimic some of the more trivial aspects
  • It bears a much closer resemblance to texture processing in 3D modeling apps. There may be other specific domains that have this sort of approach, but web development is certainly not one of them.

  • Machine learning and artificial intelligence are so difficult to understand, only a few very smart computer scientists know how to build them.

    Which, amplified by 'fantasy' input from television, movies, fiction, and the media, is probably why the vast majority of people apparently over-estimate the capabilities of so-called 'AI', attributing capabilities to it that it does not and cannot possess, and trust the technology way too much.

    The tool, called Cortex, uses a graphical user interface to make it so that building an AI model doesn't require a PhD.

    Great. Now we'll have unqualified people, with an almost total lack of understanding, believing they can be 'AI Scientists', creating even more half-assed excuses for AI. Wonderful.

    Coming soon to a Best Buy near you.

  • Well, this is total bullshit:

    Machine learning and artificial intelligence are so difficult to understand, only a few very smart computer scientists know how to build them.

  • Clueless Vaporware (Score:5, Interesting)

    by Anonymous Coward on Thursday January 25, 2018 @07:28PM (#56004023)

    I see pictures whose resolution is too low to read most of their text. I see no links to any demos. I see a UI that is claimed to be awesome but instead draws linkage lines directly on top of the text of other elements. I see an app store where they can rake in a ton of profit. I expect this system to be hyped like crazy and then never be usable.

    The author (or the company's media folks) know so little about AI that they mistake the categories of AI for actual things which can be built. You can't build "machine learning", but you can build something which uses machine learning, or a framework for doing ML. The author isn't an expert in this field, so their claim of the first GAI is completely untrustworthy. They're also using the term incorrectly. A Graphical AI Interface would be a graphical interface used by an AI, not an interface to an AI system. Just as GUIs are graphical interfaces used by a user, not interfaces to the user.

    There is nothing good in the article. It's all fluff, buzzwords, and misdirection, with no proof of any of it.

  • What these dudes with PhDs making AI are actually doing is a new level of bullshitting with fancy words, which impresses people with money and of course legislators, who are just as clueless about computer technology. They think manipulating a URL to look at the image directory of a server is "hacking".

    Machine learning isn't all that complex, and it sure isn't new either. I agree with others here that this is just an ignorant journalism major spouting off buzzwords.

    If you want to see a really nice GUI-designed AI interface, grab Scratch from MIT and then look at some of the AI experiments that have been done in that programming environment. They aren't necessarily all that fast, and certainly some other programming environments would make them run more efficiently, but the idea isn't even all that new.

    Also... the other shoe dropped when the developers of this "Cortex" programming environment started to explain the "app store" business model they were getting into. It is a scam to separate you from the money in your wallet, and the author bought into the buzzwords to make this seem like a cool thing.

    • +1 this stuff weakens the whole industry -- I just got through losing a huge debate in which a family member expressed frustration over what she perceived to be irresponsibility in the field of AI research. I felt that researchers were doing just that: creating proof of concept that demonstrates refinement of technique, while she felt that they had some responsibility for making sure the output was racially unbiased, safe, complete, accurate, etc. I agree that these are important goals, but I feel tha
