AI

'Memtransistor' Brings World Closer To Brain-Like Computing 94

the gmr writes: According to a recent article published in the journal Nature, researchers at Northwestern University's McCormick School of Engineering have developed a "memtransistor," a device that both stores and processes information. The combined transistor and memory resistor (memristor) works more like a neuron and promises to make computing more brain-like. The "memtransistor" would use less energy than digital computers and eliminate the need to run memory and processing as separate functions. Lead researcher Mark C. Hersam explained the neuron-like character of the memtransistor: "...in the brain, we don't usually have one neuron connected to only one other neuron. Instead, one neuron is connected to multiple other neurons to form a network. Our device structure allows multiple contacts, which is similar to the multiple synapses in neurons... [but] making dozens of devices, as we have done in our paper, is different than making a billion, which is done with conventional transistor technology today." Hersam reported no barriers to scaling up to billions of devices. The technology could make smart devices more capable and possibly more human-like, and may also promote advances in neural networks and brain-computer interfaces, topics also recently covered at Futurism.
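As a rough illustration of the compute-in-memory idea behind memristive devices (a toy sketch, not the Northwestern device itself, and with made-up values), consider how an array of resistive memory elements can both store weights and compute weighted sums in place, much as synapses do:

```python
import numpy as np

# Toy model of a memristive crossbar: each stored conductance acts as a synaptic
# weight, and applying input voltages yields output currents that are already the
# weighted sums (Ohm's law plus Kirchhoff's current law). Storage and computation
# happen in the same physical elements -- the property a multi-terminal
# memtransistor extends with additional contacts per device.
conductances = np.array([[0.8, 0.1, 0.3],   # weights "remembered" by the devices
                         [0.2, 0.9, 0.4]])  # (illustrative values only)
input_voltages = np.array([1.0, 0.0, 0.5])  # activations applied to the input rows

output_currents = conductances @ input_voltages  # multiply-accumulate "for free"
print(output_currents)  # -> [0.95 0.4]
```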
Education

Learning To Program Is Getting Harder (slashdot.org) 408

theodp writes: While Google suggests that parents and educators are to blame for why kids can't code, Allen Downey, a professor at Olin College, argues that learning to program is getting harder. Downey writes: The fundamental problem is that the barrier between using a computer and programming a computer is getting higher. When I got a Commodore 64 (in 1982, I think) this barrier was non-existent. When you turned on the computer, it loaded and ran a software development environment (SDE). In order to do anything, you had to type at least one line of code, even if all it did was run another program (like Archon). Since then, three changes have made it incrementally harder for users to become programmers:
1. Computer retailers stopped installing development environments by default. As a result, anyone learning to program has to start by installing an SDE -- and that's a bigger barrier than you might expect. Many users have never installed anything, don't know how to, or might not be allowed to. Installing software is easier now than it used to be, but it is still error prone and can be frustrating. If someone just wants to learn to program, they shouldn't have to learn system administration first.
2. User interfaces shifted from command-line interfaces (CLIs) to graphical user interfaces (GUIs). GUIs are generally easier to use, but they hide information from users about what's really happening. When users really don't need to know, hiding information can be a good thing. The problem is that GUIs hide a lot of information programmers need to know. So when a user decides to become a programmer, they are suddenly confronted with all the information that's been hidden from them. If someone just wants to learn to program, they shouldn't have to learn operating system concepts first.
3. Cloud computing has taken information hiding to a whole new level. People using web applications often have only a vague idea of where their data is stored and what applications they can use to access it. Many users, especially on mobile devices, don't distinguish between operating systems, applications, web browsers, and web applications. When they upload and download data, they are often confused about where it is coming from and where it is going. When they install something, they are often confused about what is being installed where. For someone who grew up with a Commodore 64, learning to program was hard enough. For someone growing up with a cloud-connected mobile device, it is much harder.
theodp continues: So, with the Feds budgeting $200 million a year for K-12 CS at the behest of U.S. tech leaders, can't the tech giants at least put a BASIC on every phone/tablet/laptop for kids?
Transportation

Distracted Driving: Everyone Hates It, But Most of Us Do It, Study Finds 140

An anonymous reader quotes a report from Ars Technica: Insurance company Esurance has a new study out on distracted driving, and it makes for interesting reading. Almost everyone agrees distracted driving is bad, yet it's still remarkably prevalent. Even drivers who report rarely driving distracted also report that they engage in distracting behaviors. The study also raises some questions about the growing complexity of modern vehicles, particularly the user interfaces they confront us with. The Esurance report includes survey data from more than a thousand participants. More than 90 percent said that browsing for apps, texting, and emailing were distracting, yet more than half of daily commuters admitted to doing those things. The survey also found that the longer your commute, the greater the chance you'll get distracted, probably by your phone. Even participants who reported being "rarely distracted" admitted to distracting behavior like talking on the phone or viewing GPS navigation data. (To avoid becoming a distraction, any task performed while driving should take under two seconds.)
Bug

Apple is Postponing Release of New Features To iOS This Year To Focus on Reliability and Performance: Report (axios.com) 106

For a change, Apple plans to postpone some new iOS features this year so that it can focus on the reliability and quality of the software instead, Axios reported on Tuesday. From the report: Apple has been criticized of late, both for security issues and for a number of quality issues, as well as for how it handles battery issues on older devices. Software head Craig Federighi announced the revised plan to employees at a meeting earlier this month, shortly before he and some top lieutenants headed to a company offsite. Pushed into 2019 are a number of features, including a refresh of the home screen and in-car user interfaces, improvements to core apps like Mail, and updates to the picture-taking, photo editing and sharing experiences.
Privacy

2 Years Later, Security Holes Linger In GPS Services Used By Millions of Devices (securityledger.com) 12

chicksdaddy quotes a report from The Security Ledger: Security researchers say that serious security vulnerabilities linger in GPS services from the China-based firm ThinkRace more than two years after the holes were discovered and reported to the firm. Data including a GPS-enabled device's location, serial number, assigned phone number, and device model and type can be accessed by any user with access to the GPS service. In some cases, other information is available, including the device's location history going back one week. Malicious actors could also, in some cases, send commands to the device via SMS, including those used to activate or deactivate geofencing alarm features such as those used on child-tracking devices.

The vulnerabilities affect hundreds of thousands of connected devices that use the GPS services, from smart watches to vehicle GPS trackers, fitness trackers, pet trackers and more. At issue are security holes in back-end GPS tracking services that go by names like amber360.com, kiddo-track.com, carzongps.com and tourrun.net, according to Michael Gruhn, an independent security researcher who noted the insecure behavior in a location tracker he acquired and has helped raise awareness of the widespread flaws. Working with researcher Vangelis Stykas, Gruhn discovered scores of seemingly identical GPS services, many of which have little security, allowing low-skill hackers to directly access data on GPS tracking devices.
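To make the class of flaw concrete, here is a minimal, entirely hypothetical Flask sketch of what "little security" looks like in an API of this kind: the endpoint returns a record for whatever device ID it is given, with no check that the caller is allowed to see it. None of the names or routes below come from the actual ThinkRace services.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical back-end data keyed only by device ID.
DEVICES = {
    "1000001": {"owner": "alice", "lat": 52.52, "lon": 13.40},
}

@app.route("/api/device/<device_id>/position")
def position(device_id):
    record = DEVICES.get(device_id)
    if record is None:
        return jsonify({"error": "unknown device"}), 404
    # The flaw: no authentication or ownership check, so anyone who can guess or
    # enumerate a device ID gets its location. A safe version would verify the
    # caller's session and confirm that account owns this device before responding.
    return jsonify({"lat": record["lat"], "lon": record["lon"]})
```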

Alas, news about the security holes is not new. In fact, the security holes in ThinkRace's GPS services are identical to those discovered by New Zealand researcher Lachlan Temple in 2015 and publicly disclosed at the time. Temple's research focused on one type of device: a portable GPS tracker that plugged into a vehicle's On-Board Diagnostics (OBD) port. However, Stykas and Gruhn say that they have discovered the same holes spread across a much wider range of APIs (application programming interfaces) and services linked to ThinkRace.

Operating Systems

Apple Lisa OS To Be Released For Free As Open Source In 2018 (iphoneincanada.ca) 95

New submitter Jose Deras writes: Nearly 35 years ago, Apple released its first computer with a graphical user interface, the Lisa. According to a new report from Business Insider, starting next year the Computer History Museum will release the code behind the Apple Lisa operating system for free as open source, for anyone to try and tinker with. The news was announced via LisaList, a mailing list for Lisa enthusiasts.

"While Steve Jobs didn't create the Lisa, he was instrumental in its development. It was Jobs who convinced the legendary Xerox PARC lab to let the Apple Lisa team visit and play with its prototypes for graphical user interfaces," reads the report. "And while Apple at the time said that Lisa stood for 'Local Integrated System Architecture,' Jobs would later claim to biographer Walter Isaacson that the machine was actually named for his oldest daughter, Lisa Nicole Brennan-Jobs." "Then-Apple CEO John Sculley had Jobs removed from the Lisa project, which kicked off years-long animosity between the two," continues the report. "Ultimately, a boardroom brawl would result in Jobs quitting in a huff to start his own company, NeXT Computer. Apple would go on to buy NeXT in 1996, bringing Jobs back into the fold. By 1997, Jobs had become CEO of Apple, leading the company to its present status as the most valuable in the world."

Technology

That '70s Show: the Conference That Predicted the Future of Work (wired.com) 40

theodp writes: Over at Wired, Leslie Berlin writes about Futures Day at the 1977 Xerox World Conference, an invitation-only demonstration of the Alto personal computer system developed at Xerox PARC. It's an excerpt from Troublemakers: How a Generation of Silicon Valley Upstarts Invented the Future. Both Berlin's book and Brian Dear's recent The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture are shedding light on groundbreaking systems of the '70s that were ultimately done in by the less-featured but low-cost Apple II (yes, $2,638 for a system with 48 kB of RAM was 'low cost'!) and other personal computers. Interestingly, Dear notes that the Xerox PARC and PLATO teams sent people out to see and learn and exchange ideas with each other over the years. Their interactions included 'tremendous battles' over the advantages and disadvantages of mouse interfaces [Xerox] vs. touch screens [PLATO], as well as plasma displays [PLATO] vs. other, cheaper display solutions [Xerox]. As is the case with many debates, both teams proved to be "right." Apple wouldn't introduce the masses to a mouse interface until 1984 [Macintosh] and a touch screen interface until 2007 [iPhone].
Input Devices

What Will Replace Computer Keyboards? (xconomy.com) 302

jeffengel writes: Computer keyboards will be phased out over the next 20 years, and we should think carefully about what replaces them as the dominant mode of communicating with machines, argues Android co-founder Rich Miner. Virtual reality technology and brain-computer links -- whose advocates include Elon Musk -- could lead to a "dystopian" future where people live their lives inside of goggles, or they jack directly into computers and become completely "de-personalized," Miner worries.

He takes a more "humanistic" view of the future of human-machine interfaces, one that frees us to be more expressive and requires computers to communicate on our level, not the other way around. That means software that can understand our speech, facial expressions, gestures, and handwriting. These technologies already exist, but have a lot of room for improvement.

One example he gives is holding up your hand to pause a video.
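As a rough sketch (not anything Miner describes implementing), the "hand up to pause" idea can already be prototyped with off-the-shelf libraries such as OpenCV and MediaPipe; the playback calls at the end are placeholders for whatever video player API is in use:

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # webcam watching the viewer

video_paused = False
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV captures BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    hand_raised = result.multi_hand_landmarks is not None
    if hand_raised and not video_paused:
        video_paused = True
        # player.pause()  -- placeholder for the real playback API
    elif not hand_raised and video_paused:
        video_paused = False
        # player.play()   -- placeholder for the real playback API
cap.release()
```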
Input Devices

Meet The Next Major Operating System: Amazon's Alexa (zdnet.com) 168

ZDNet's editor-in-chief warns that Amazon has ambitious plans for its new Echo Plus: Amazon is making an explicit play to be the home hub because it can automatically discover and set up lights, locks, plugs, and switches without the need for additional hubs or apps. And the Alexa 'routines' feature will be able to tie all of this together by allowing you to automate a series of actions with a single voice command: saying "Alexa, good night," and having it turn off the lights, lock the door, and turn off the TV, for example. A platform that other apps and devices can connect into? This starts to sound a lot like an operating system for the home to me.

It's not just the home, either; Amazon announced a deal to make Alexa available in BMW and Mini vehicles from the middle of next year, allowing drivers to use the digital assistant to get directions, play music or control smart home devices while travelling, without having to use a separate app. Travellers will also have access to Alexa skills from third-party developers like Starbucks, allowing them to order their coffee while driving and thus skip the line. Back in January, Amazon and Ford said they were working together to allow voice commands to turn on the engine, lock or unlock the doors as well as play music and use other skills...

It's still early days but I think Alexa has a good shot at becoming one of the standard interfaces, certainly for consumers -- an operating system for the home, if not more, if the automotive tie-ups take off too. All of this will make Amazon a serious force to be reckoned with. Windows has the desktop, and Android and iOS can fight it out for the smartphone, but right now Alexa has a lock on the smart home.

Technology

What Comes After User-Friendly Design? (fastcodesign.com) 189

Kelsey Campbell-Dollaghan, writing for FastCoDesign: "User-friendly" was coined in the late 1970s, when software developers were first designing interfaces that amateurs could use. In those early days, a friendly machine might mean one you could use without having to code. Forty years later, technology is hyper-optimized to increase the amount of time you spend with it, to collect data about how you use it, and to adapt to engage you even more. [...] The discussion around privacy, security, and transparency underscores a broader transformation in the typical role of the designer, as Khoi Vinh, principal designer at Adobe and frequent design writer on his own site, Subtraction, points out. So what does it mean to be friendly to users -- er, people -- today? Do we need a new way to talk about design that isn't necessarily friendly, but respectful? I talked to a range of designers about how we got here, and what comes next.
Businesses

It's Official: Users Navigate Flat UI Designs 22 Percent Slower (theregister.co.uk) 408

Reader Zorro writes: The mania for "flat" user interfaces is costing publishers and e-commerce sites billions in lost revenue. A "flat" design removes the distinction between navigation controls and content. Historically, navigation controls such as buttons were shaded, or given 3D relief, to distinguish them from the application or web page's content. The mania is credited to Microsoft with its minimalistic Zune player, an iPod clone, which was developed into the Windows Phone Series UX, which in turn became the design for Windows from Windows 8 in 2012 onwards. But Steve Jobs is also to blame. The typography-besotted Apple founder was fascinated by WP's "magazine-style" Metro design, and it was posthumously incorporated into iOS 7 in 2013. Once blessed by Apple, flat designs spread to electronic programme guides on telly, games consoles and even car interfaces. The consequence is that users find navigation harder, and so spend more time on a page. Now research by the Nielsen Norman Group has measured by how much. The company wired up 71 users and gave them nine sites to use, tracking their eye movement and recording the time spent on content. "On average participants spent 22 per cent more time (i.e. slower task performance) looking at the pages with weak signifiers," the firm notes. Why would that be? Users were looking for clues about how to navigate. "The average number of fixations was significantly higher on the weak-signifier versions than the strong-signifier versions. On average, people had 25 per cent more fixations on the pages with weak signifiers."
Software

Why Are There So Many Knobs in Audio Software? (theoutline.com) 214

John Lagomarsino, writing for The Outline: Skeuomorphic design, where user interfaces emulate the appearance of physical objects, has been popular for pretty much the entire history of personal computing. The ideas of "files," "folders," and the "recycle bin" in Windows could be considered skeuomorphs, intended to help transition early computer users from analog to digital, as could the idea of an "inbox" and "outbox" in email and the paperclip that symbolizes attachments. More recently, a lot of early iOS apps were famous for their heavy-handed skeuomorphic elements, with felt textures and chunky drop shadows. But no area of computing has gone for it more thoroughly than audio software. The first Billboard #1 single that was recorded to a hard drive instead of tape was "Livin' La Vida Loca" in 1999; 18 years later, in 2017, most audio software still looks like the designers attempted to replicate physical equipment piece for piece on a computer screen. Faders, switches, knobs, needles twitching between numbers on a volume meter -- they're all there. Except you have to control them with a mouse. Winamp may have been Patient Zero in this gaudy epidemic, but it has spread far and wide. I spend a lot of my time mixing and editing audio, and that often involves having multiple audio plugins (essentially applications that run inside the main audio program) from multiple vendors running simultaneously. But all audio software, for what I suppose are historical reasons, features the most egregious skeuomorphic design in all of software. Alone, each plugin is hideous in its own unique way. A panel of 3D knobs here, a pixelated oscilloscope there.
Communications

Engineers Discover How To Make Antennas For Wireless Communication 100x Smaller Than Their Current Size (sciencemag.org) 129

Engineers have figured out how to make antennas for wireless communication 100 times smaller than their current size, an advance that could lead to tiny brain implants, micro-medical devices, or phones you can wear on your finger. Science Magazine reports: The new mini-antennas play off the difference between electromagnetic (EM) waves, such as light and radio waves, and acoustic waves, such as sound and inaudible vibrations. EM waves are fluctuations in an electromagnetic field, and they travel at light speed -- an astounding 300,000,000 meters per second. Acoustic waves are the jiggling of matter, and they travel at the much slower speed of sound -- in a solid, typically a few thousand meters per second. So, at any given frequency, an EM wave has a much longer wavelength than an acoustic wave. Antennas receive information by resonating with EM waves, which they convert into electrical voltage. For such resonance to occur, a traditional antenna's length must roughly match the wavelength of the EM wave it receives, meaning that the antenna must be relatively big. However, like a guitar string, an antenna can also resonate with acoustic waves. The new antennas take advantage of this fact. They pick up EM waves of a given frequency if their size matches the wavelength of the much shorter acoustic waves of the same frequency. That means that for any given signal frequency, the antennas can be much smaller. The trick is, of course, to quickly turn the incoming EM waves into acoustic waves.
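The size difference the researchers exploit follows directly from wavelength = speed / frequency. A quick back-of-the-envelope check (the acoustic speed below is a typical value for a solid, not a figure from the paper):

```python
FREQ = 2.5e9            # Hz, a WiFi-band frequency like the one tested
EM_SPEED = 3.0e8        # m/s, speed of light
ACOUSTIC_SPEED = 5.0e3  # m/s, typical speed of sound in a solid (assumed)

em_wavelength = EM_SPEED / FREQ              # ~0.12 m, sets the scale of a normal antenna
acoustic_wavelength = ACOUSTIC_SPEED / FREQ  # ~2e-6 m, sets the scale of an acoustic resonator

print(em_wavelength, acoustic_wavelength)
# roughly 0.12 m vs 2 micrometers: the acoustic wavelength at the same frequency
# is tens of thousands of times shorter, which is why the resonator can shrink.
```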

The team created two kinds of acoustic antennas. One has a circular membrane, which works for frequencies in the gigahertz range, including those for WiFi. The other has a rectangular membrane, suitable for megahertz frequencies used for TV and radio. Each is less than a millimeter across, and both can be manufactured together on a single chip. When researchers tested one of the antennas in a specially insulated room, they found that compared to a conventional ring antenna of the same size, it sent and received 2.5 gigahertz signals about 100,000 times more efficiently, they report in Nature Communications.

Power

Researchers Reveal Malware Designed To 'Power Down' Electric Grid (securityledger.com) 42

chicksdaddy writes: A sample of malicious software discovered at the site of a December 2016 cyber attack on Ukraine's electrical grid is a previously unknown program that could be capable of causing physical damage to the electrical grid, according to reports by two security firms. The Security Ledger reports: "Experts at the firms ESET and Dragos Security said on Monday that the malicious software, dubbed CrashOverride (Dragos) or Industroyer (ESET), affected a 'single transmission level substation' in the Ukraine attack on December 17th, 2016 in what appears to have been a test run. Still, experts said that features in the malware show that adversaries are automating and standardizing what were previously manual attacks against critical infrastructure, while also adding features that could be used to physically disable or damage critical systems -- the first evidence of such activity since the identification of the Stuxnet malware in 2010. The CrashOverride malware 'took an approach to understand and codify the knowledge of the industrial process to disrupt operations as STUXNET [sic] did,' wrote Dragos Security in a report. The malware improves on features seen in other malicious software known to target industrial control systems. Specifically, the malware makes use of and manipulates industrial control system-specific communications protocols. That's similar to features in ICS malware known as Havex that targeted grid operators in Europe and the United States in 2014. The CrashOverride malware also targeted the libraries and configuration files of so-called 'Human Machine Interfaces' (or HMIs) to understand the environment it has infected. It can use HMIs, which provide a graphical interface for managing industrial control system equipment, to connect to and spread to other Internet-connected equipment and systems, Dragos said."
AI

Startup Uses AI To Create Programs From Simple Screenshots (siliconangle.com) 89

An anonymous reader shares an article: A Danish startup called UIzard Technologies IVS has built a neural network that can transform raw designs of graphical user interfaces into actual source code that can be used to build them. Company founder Tony Beltramelli has just published a research paper that reveals how it was achieved. It uses cutting-edge machine learning techniques to create a neural network that generates code automatically when it's fed screenshots of a GUI. The Pix2Code model actually outperforms many human coders because it can create code for three separate platforms, including Android, iOS and "web-based technologies," whereas many programmers are only able to do so for one platform. Pix2Code can create GUIs from screenshots with an accuracy of 77 percent, but that will improve as the algorithm learns more, the founder said.
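For a sense of the kind of architecture the paper describes -- a vision model that encodes the screenshot and a recurrent model that emits code tokens -- here is a heavily simplified Keras sketch. Layer sizes, vocabulary, and sequence length are illustrative assumptions, not the actual Pix2Code model:

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20   # tokens of a tiny GUI description language (assumed)
MAX_TOKENS = 48   # length of the generated token sequence (assumed)

# Encoder: a small CNN turns the GUI screenshot into a feature vector.
image_in = keras.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, strides=2, activation="relu")(image_in)
x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
image_features = layers.Dense(128, activation="relu")(x)

# Decoder: an LSTM consumes the tokens generated so far, conditioned on the image.
tokens_in = keras.Input(shape=(MAX_TOKENS,), dtype="int32")
e = layers.Embedding(VOCAB_SIZE, 64)(tokens_in)
context = layers.RepeatVector(MAX_TOKENS)(image_features)
h = layers.Concatenate()([e, context])
h = layers.LSTM(256, return_sequences=True)(h)
next_token = layers.Dense(VOCAB_SIZE, activation="softmax")(h)

model = keras.Model([image_in, tokens_in], next_token)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Trained on (screenshot, partial token sequence) -> next-token pairs, a model of
# this shape can generate a DSL describing the GUI, which a separate compiler step
# then turns into platform-specific (Android, iOS, or web) source code.
```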
The Media

Walt Mossberg's Last Column Calls For Privacy and Security Laws (recode.net) 96

70-year-old Walt Mossberg wrote his last weekly column Thursday, looking back on how "we've all had a hell of a ride for the last few decades" and revisiting his famous 1991 pronouncement that "Personal computers are just too hard to use, and it isn't your fault." Not only were the interfaces confusing, but most tech products demanded frequent tweaking and fixing of a type that required more technical skill than most people had, or cared to acquire. The whole field was new, and engineers weren't designing products for normal people who had other talents and interests. But, over time, the products have gotten more reliable and easier to use, and the users more sophisticated... So, now, I'd say: "Personal technology is usually pretty easy to use, and, if it's not, it's not your fault." The devices we've come to rely on, like PCs and phones, aren't new anymore. They're refined, built with regular users in mind, and they get better each year. Anything really new is still too close to the engineers to be simple or reliable.
He argues we're now in a strange lull before entering an unrecognizable world where major new breakthroughs in areas like A.I., robotics, smart homes, and augmented reality lead to "ambient computing", where technology itself fades into the background. And he uses his final weekly column to warn that "if we are really going to turn over our homes, our cars, our health and more to private tech companies, on a scale never imagined, we need much, much stronger standards for security and privacy than now exist. Especially in the U.S., it's time to stop dancing around the privacy and security issues and pass real, binding laws."
Microsoft

Microsoft Is Surprisingly Comfortable With Its New Place In a Mobile, Apple, and Android World (fastcompany.com) 73

An anonymous reader writes: The company that once held a mock funeral for the iPhone -- complete with dedicated "iPhone trashcans" -- now has a very different attitude about the company of Jobs. The Microsoft whose old CEO Steve Ballmer famously predicted in 2007 that the iPhone had "no chance; no chance at all" of getting market share now readily accepts and embraces a world where the iPhone and Android dominate personal computing. Microsoft talked a lot here at its Build 2017 developer conference about extending Windows experiences over to iOS and Android devices. And it's not just about fortifying Windows. Microsoft says it not only wants to connect with those foreign operating systems, but by bringing over functionality from Windows 10 (along with content) it hopes to "make those other devices better," as one Microsoft rep said in a press briefing yesterday. The developers here at Build cheered when Microsoft announced XAML Standard 1.0, which provides a single markup language for building user interfaces that work on Windows, iOS, and Android. In one demo, the company showed how an enterprise sales app could be extended to an iOS device so someone could continue capturing a potential client's data on a mobile device. Windows not only sent over the client data that had already been captured, but also the business-app shell that had captured it.
Operating Systems

New Windows Look and Feel, Neon, Is Officially the 'Microsoft Fluent Design System' (arstechnica.com) 95

An anonymous reader quotes a report from Ars Technica: Earlier this year, pictures of a new Windows look and feel leaked. Codenamed Project Neon, the new look builds on Microsoft Design Language 2 (MDL2), the styling currently used in Windows 10, to add elements of translucency and animation. Neon has now been officially announced, and it has an official new name: the Microsoft Fluent Design System. The switch from "design language" to "design system" is deliberate; Fluent is intended to define not just the appearance but also the interactivity. Though visually there are common elements, the system is designed to work across virtual/augmented reality, phones, tablets, desktop PCs, and games consoles, using mice, keyboards, motion controllers, voice, gestures, touch, and pen, with the interactivity and input optimized to each particular form factor. Fluent is described as having five "fundamentals": light, depth, motion, material, and scale. "Light" means that the interface should avoid distractions and strive to ensure that attention is drawn to where it needs to be. With "depth," Fluent apps will make greater use of layering and the relationships between objects and interface elements. Fluent will use "motion" to indicate relationships and connections between elements, establishing context. Microsoft is using "material" to mean making the best use of screen space and giving room to content. "Scale" means building interfaces that can go beyond two dimensions, and go beyond the size of a screen, to embrace new form factors and input methods as they arrive.
AI

'This Isn't AI' (shkspr.mobi) 138

The Amazon Echo, a 'smart' speaker developed by Amazon.com, gets many things right. You can ask it for weather updates, the news, or music, and Alexa, the AI powering the device, won't disappoint. But how smart is Alexa? Programmer Terence Eden put it to a simple test to find out. From a blog post: I can now query my solar panels via my Amazon Echo Dot. I flatter myself as a reasonably competent techie and programmer, but fuck me AWS Lambdas and Alexa skills are a right pile of shite! I wanted something simple. When I say "Solar Panels", call this API, then say this phrase. That's the kind of thing which should take 5 minutes in something like IFTTT. Instead, it took around two hours of following out-of-date official tutorials, and whinging on Twitter, before I got my basic service up and running. [...] It's not so bad, but it does reveal Amazon's contempt for developers. Several of the steps contained errors, it involves multiple logins, random clicks, and a bunch of copy & pasting. Dull and complex. A frustrating and ultimately unsatisfying experience. I ended up using StackOverflow to correct errors in my code because the documentation was so woefully lacking. I kinda thought that Amazon would hear "solar panels" and work out the rest of the query using fancy neural network magic. Nothing could be further from the truth. The developer has to manually code every single possible permutation of the phrase that they expect to hear. This isn't AI. Voice interfaces are the command line. But you don't get tab-to-complete. Amazon allows you to test your code by typing rather than speaking. I spent a frustrating 10 minutes trying to work out why my example code didn't work. Want to know why? I was typing "favourite" rather than the American spelling. Big Data my shiny metal arse.
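To illustrate what Eden is describing, here is a minimal sketch of the Lambda-handler side of a custom Alexa skill. The intent name, the sample utterances, and the solar-panel API are hypothetical stand-ins; only the general request/response shape follows the Alexa Skills Kit:

```python
# Sample utterances must be enumerated by hand in the skill's interaction model,
# which is exactly the complaint above -- e.g.:
#   SolarStatusIntent  what are my solar panels doing
#   SolarStatusIntent  how are the solar panels
#   SolarStatusIntent  solar panels
# (one line per phrasing you expect to hear; nothing is inferred for you)

import json
import urllib.request

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest" and \
            request["intent"]["name"] == "SolarStatusIntent":
        # Hypothetical local API returning current generation in watts.
        with urllib.request.urlopen("http://example.com/solar/status") as resp:
            watts = json.load(resp)["watts"]
        speech = f"Your panels are generating {watts} watts."
    else:
        speech = "Sorry, I didn't understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```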
