Open Source

Ask Slashdot: How Can I Make My Own Vaporware Real? 126

Long-time Slashdot reader renuk007 is a retired Unix/Linux systems programmer with the ultimate question: After retiring I started a second career as a teacher -- and I'm loving it. My problem: I designed what I feel is a wonderful compiler for a new language, but implementing it will take me another ten years if I have to do it part-time.

Linus Torvalds was able to leverage the enthusiasm of the Internet to make Linux exist, but 1990 was a more innocent time. How does it work today? Any thoughts?

Or, to put it another way, how can you build a community to bring your ideas to light? Leave your best thoughts and suggestions in the comments. How can you make your own vaporware real?
Media

Ask Slashdot: How Do You Stream/Capture Video? 155

datavirtue writes: I am starting to look at capturing and streaming video, specifically video games in 4K at 60 frames per second. I have a Windows 10 box with a 6GB GTX 1060 GPU and a modern AMD octa-core CPU recording with Nvidia ShadowPlay. This works flawlessly, even in 4K at 60 fps. ShadowPlay produces MP4 files which play nice locally but seem to take a long time to upload to YouTube -- a 15-minute 4K 60fps video took almost three hours. Which tools are you fellow Slashdotters using to create, edit, and upload video in the most efficient manner?
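
One likely culprit for the three-hour upload is simply file size: ShadowPlay records 4K60 at very high bitrates (up to roughly 130 Mbps), so a 15-minute clip can run well over 10 GB, which will saturate a typical residential uplink for hours. A common workaround is to re-encode to a smaller, quality-targeted file before uploading. Below is a minimal sketch, assuming ffmpeg is installed and on the PATH; the filenames and the CRF/preset values are illustrative, not the submitter's setup.

    # Re-encode a ShadowPlay MP4 into a smaller, upload-friendly file with ffmpeg.
    import subprocess

    def reencode_for_upload(src: str, dst: str, crf: int = 20) -> None:
        """Quality-targeted x264 re-encode; the audio track is passed through untouched."""
        subprocess.run(
            [
                "ffmpeg", "-i", src,
                "-c:v", "libx264",          # ubiquitous software H.264 encoder
                "-preset", "slow",          # slower preset = smaller file at equal quality
                "-crf", str(crf),           # lower CRF = higher quality, bigger file
                "-c:a", "copy",             # keep the original audio as-is
                "-movflags", "+faststart",  # put the MP4 index up front for streaming
                dst,
            ],
            check=True,
        )

    reencode_for_upload("shadowplay_raw.mp4", "upload_ready.mp4")

Hardware encoders (h264_nvenc / hevc_nvenc on that GTX 1060) are much faster at some cost in compression efficiency, so the real trade-off is encode time versus upload time.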
Operating Systems

Ask Slashdot: Do You Miss Windows Phone? (theverge.com) 284

An anonymous reader writes: After recently switching on an old Windows Phone to create a silly April Fools' joke, The Verge's Tom Warren discovered just how much he missed Microsoft's mobile OS. Two of the biggest features that are hard to find/replicate on iOS and Android are the Metro design and Live Tiles. "Android and iOS still don't have system-wide dark modes, nearly 8 years after Windows Phone first introduced it," notes Warren. "Live Tiles were one of Windows Phone's most unique features. They enabled apps to show information on the home screen, similar to the widgets found on Android and iOS. You could pin almost anything useful to the home screen, and Live Tiles animated beautifully to flip over and provide tiny nuggets of information that made your phone feel far more personal and alive."

Some other neat features include the software keyboard, which Warren argues "is still far better than the defaults on iOS and Android," especially with the recently added tracing feature that lets you swipe to write words. "Microsoft also experimented with features that were different to other mobile platforms, and some of the concepts still haven't really made their way to iOS or Android: Kid's Corner; Dedicated search button; Browser address bar; People hub; Unified messaging..." Aside from the competition aspect with Google and Apple, do you miss Windows Phone? What are some specific features you miss about the old mobile operating system?

IT

Ask Slashdot: Are Companies Under-Investing in IT? 325

Long-time Slashdot reader johnpagenola writes: In the mid-1970s I had to choose between focusing on programming or accounting. I chose accounting because organizations were willing to pay for good accounting but not for good IT.

Forty years later the situation does not appear to have changed. Target, Equifax, ransomware, etc. show pathetically bad IT design and operation. Why does this pattern of underinvestment in and under-appreciation of IT continue?

Long-time Slashdot reader dheltzel argues that the problem is actually bad hiring practices, which over time lead to lower-quality employees. But it seems like Slashdot's readership should have their own perspective on the current state of the modern workplace.

So share your own thoughts and experiences in the comments. Are companies under-investing in IT?
Programming

Ask Slashdot: Should Coding Exams Be Given on Paper? 273

Slashdot reader Qbertino is pursuing a comp sci degree -- and got a surprise during the last exam: being asked to write code on paper. Not that I'd expect an IDE -- it's an exam after all -- but being able to use a screen and a keyboard with a very simple editor should be standard at universities these days... I find this patently absurd in 2018...

What do you think and what are your recent experiences with exams at universities? Is this still standard? What's the point besides annoying students? Did I miss something?

A similar question was asked on Slashdot 16 years ago -- but apparently nothing has changed since 2002.

Leave your best answers in the comments. Should coding exams be given on paper?
Google

Ask Slashdot: What Does Your Data Mean To Google? (google.com) 88

shanen writes: Due to the recent kerfuffles, I decided to try again to see what Google had on me. This time I succeeded and failed, in contrast to the previous pure failures. Yes, I did find Google's takeout website and downloaded all of "my data," but no, it means nothing to me. Here are a few sub-questions I couldn't answer:

1. Much more data than I ever created, so where did the rest come from?
2. How does the data relate to the characteristic vector that Google uses to characterize me?
3. What tools do Googlers use to make sense of the data?

Lots more questions, but those are the ones that are most bugging me right now. Question 2 probably weighs heaviest among them, since I've read that the vector has 700 dimensions... So do you have any answers? Or better questions? Or your own Takeout experiences to share? Oh yeah, one more thing. Based on my own troubled experience with the download process, it is clear that Google doesn't really want us to download our so-called "own" data. My question 4 is now: "What is Google hiding about me from me?"
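
For sub-question 1, a useful first step is simply measuring where the bulk of the archive lives. The sketch below inventories an extracted Takeout archive by top-level product folder; it assumes the archive was unzipped to a local "Takeout" directory, and since Google changes the export layout from time to time, treat it as a starting point rather than a spec.

    # Inventory an extracted Google Takeout archive: file count and size per product folder.
    import os
    from collections import defaultdict

    def summarize_takeout(root: str = "Takeout") -> None:
        sizes = defaultdict(int)   # bytes per top-level product folder
        counts = defaultdict(int)  # file count per folder
        for dirpath, _dirnames, filenames in os.walk(root):
            rel = os.path.relpath(dirpath, root)
            product = rel.split(os.sep)[0] if rel != "." else "(top level)"
            for name in filenames:
                path = os.path.join(dirpath, name)
                sizes[product] += os.path.getsize(path)
                counts[product] += 1
        for product in sorted(sizes, key=sizes.get, reverse=True):
            print(f"{product:40s} {counts[product]:6d} files {sizes[product] / 1e6:10.1f} MB")

    summarize_takeout()

Seeing most of the bytes sit under a product you barely remember using is usually the quickest partial answer to "where did the rest come from?"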

DRM

Ask Slashdot: What Would Happen If Everything On the Internet Was DRM Protected? 190

dryriver writes: The whole Digital Rights Management (DRM) train started with music and films, spread horribly to computer and console games (Steam, Origin), and turned a lot of computer software you could once buy and use into DRM-locked Software as a Service or cloud computing products (Adobe, Autodesk, MS Office 365, for example) that are impossible to use without an active Internet connection and account registration on a cloud service somewhere. Recently the World Wide Web Consortium (W3C) appears to have paved the way, via the Encrypted Media Extensions (EME) standard, for DRM to find its way into the world of Internet content in various forms as well. Here's the question: What would happen to the Internet as we know it if just about everything on a website -- text, images, audio, video, scripts, games, PDF documents, downloadable files and data, you name it -- had DRM protection and DRM usage-limitations hooked into it by default?

Imagine trying to save a JPEG image you see on a website to your hard disk, and not only does every single one of your web browsers refuse the request, but your OS's screen-capture function won't let you take a snapshot of that JPEG image either. Imagine trying to copy and paste some text from a news article somewhere into a Slashdot submission box, and having browser DRM tell you 'Sorry! The author, copyright holder or publisher of this text does not allow it to be quoted or re-published anywhere other than where it was originally published!'. And then there is the (micro-)payments aspect of DRM. What if the DRM-fest that the future Internet may become 5 to 10 years from now requires you to make payments to a copyright holder for quoting, excerpting or re-publishing anything of theirs on your own webpage? Let's say, for example, that you found some cool behind-the-scenes video of how Spiderman 8 was filmed, and you want to put that on your blog. Except that this video is DRM'd, and requires you to pay 0.1 cents each time someone watches the video on your blog. Or you want to use a short excerpt from a new sci-fi book on your blog, and the same thing happens -- you need to pay to re-publish even 4 paragraphs of the book. What then?
Graphics

Ask Slashdot: Should CPU, GPU Name-Numbering Indicate Real World Performance? 184

dryriver writes: Anyone who has built a PC in recent years knows how confusing the letters and numbers that trail modern CPU and GPU names can be because they do not necessarily tell you how fast one electronic part is compared to another electronic part. A Zoomdaahl Core C-5 7780 is not necessarily faster than a Boomberg ElectronRipper V-6 6220 -- the number at the end, unlike a GFLOPS or TFLOPS number for example, tells you very little about the real-world performance of the part. It is not easy to create one unified, standardized performance benchmark that could change this. One part may be great for 3D gaming, a competing part may smoke the first part in a database server application, and a third part may compress 4K HEVC video 11% faster. So creating something like, say, a Standardized Real-World Application Performance Score (SRWAPS) and putting that score next to the part name, letters, or series number will probably never happen. A lot of competing companies would have to agree to a particular type of benchmark, make sure all benchmarking is done fairly and accurately, and so on and so forth.

But how are average consumers just trying to buy the right home laptop or gaming PC for their kids supposed to cope with the "letters and numbers salad" that follows CPU, GPU and other computer part names? If you are computer literate, you can dive right into the different performance benchmarks for a certain part on a typical tech site that benchmarks parts. But what if you are "Computer Buyer Joe" or "Jane Average" and you just want to quickly glean which of two products -- two budget-priced laptops listed on Amazon.com, for example -- has the better performance overall? Is there no way to create some kind of rough numeric indicator of real-world performance and put it into a product's specs for quick comparison?
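
For what it's worth, existing industry suites (SPEC, for instance) already solve part of this by normalizing each workload against a reference machine and taking the geometric mean, so no single workload dominates the composite. A hypothetical SRWAPS could work the same way; the sketch below uses invented part names and numbers purely for illustration.

    # Sketch of a SPEC-style composite score for the hypothetical SRWAPS:
    # normalize each workload against a reference part, then take the geometric mean.
    # All part names and numbers below are invented.
    from math import prod

    def composite_score(results: dict, reference: dict) -> float:
        """Geometric mean of per-workload ratios versus the reference part (which scores 100)."""
        ratios = [results[w] / reference[w] for w in reference]
        return 100 * prod(ratios) ** (1 / len(ratios))

    reference = {"gaming_fps": 60.0, "db_tps": 1000.0, "hevc_encode_fps": 30.0}
    part_a    = {"gaming_fps": 90.0, "db_tps":  900.0, "hevc_encode_fps": 33.0}  # gaming-leaning part
    part_b    = {"gaming_fps": 70.0, "db_tps": 1400.0, "hevc_encode_fps": 36.0}  # server/encode-leaning part

    print(round(composite_score(part_a, reference)))  # ~114
    print(round(composite_score(part_b, reference)))  # ~125

The geometric mean rewards balanced parts, which is also why any single number can still mislead a buyer whose workload is all gaming or all database work -- exactly the objection raised above.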
Programming

Ask Slashdot: Are 'Full Stack' Developers a Thing? 371

"It seems that nearly every job posting for a software developer these days requires someone who can do it all," complains Slashdot reader datavirtue, noting a main focus on finding someone to do "front end work and back end work and database work and message queue work...." I have been in a relatively small shop that for years that has always had a few guys focused on the UI. The rest of us might have to do something on the front-end but are mostly engaged in more complex "back-end" development or MQ and database architecture. I have been keeping my eye on the market, and the laser focus on full stack developers is a real turn-off.

When was the last time you had an outage because the UI didn't work right? I can't count the number of outages resulting from inexperienced developers introducing a bug in the business logic or middle tier. Am I correct in assuming that the shops that are always looking for full stack developers just aren't grown up yet?

sjames (Slashdot reader #1,099) responded that "They are a thing, but in order to have comprehensive experience in everything involved, the developer will almost certainly be older than HR departments in 'the valley' like to hire."

And Dave Ostrander argues that "In the last 10 years front end software development has gotten really complex. Gulp, Grunt, Sass, 35+ different mobile device screen sizes and 15 major browsers to code for have made the front end skillset very valuable." The original submitter argues that front-end development "is a much simpler domain," leading to its own discussion.

Share your own thoughts in the comments. Are "full-stack" developers a thing?
Privacy

Ask Slashdot: Why Are There No True Dual-System Laptops Or Tablet Computers? 378

dryriver writes: This is not a question about dual-booting OSs -- having 2 or more different OSs installed on the same machine. Rather, imagine that I'm a business person or product engineer or management consultant with a Windows 10 laptop that has confidential client emails, Word documents, financial spreadsheets, product CAD files or similar on it -- business stuff that needs to stay confidential per my employment contract, NDAs or any other agreement I may have signed. When I have to access the internet from an untrusted access point that somebody else controls -- free WiFi in a restaurant, cafe or airport lounge in a foreign country, for example -- I do not want my main Windows 10 OS, Intel/AMD laptop hardware or other software exposed to this untrusted connection at all. Rather, I want to use a second, completely separate System On Chip (SOC) inside my laptop, running Linux or Android, to do my internet accessing. In other words, I want to be able to switch to a small standalone Android/Linux computer inside my Windows 10 laptop, so that I can do my emailing and browsing just about anywhere without any worries at all, because in that mode only the small SOC hardware and its RAM are exposed to the internet, not any of the rest of my laptop or tablet. A hardware switch on the laptop casing would let me turn the second SOC computer on when I need it, and it would take over the screen, trackpad and keyboard when used. But the SOC computer would have no physical connection at all to my main OS, BIOS, CPU, RAM, SSD, USB ports and so on.

Does something like this exist at all (if so, I've never seen it...)? And if not, isn't this a major oversight? Wouldn't it be worth sticking a $200 Android or Linux SOC computer into a laptop if that lets you access the internet anywhere, without any worries that your main OS and hardware can be compromised by third parties while you do so?
Graphics

Ask Slashdot: How Did Real-Time Ray Tracing Become Possible With Today's Technology? 145

dryriver writes: There are occasions where multiple big tech manufacturers all announce the exact same innovation at the same time -- e.g. 4K UHD TVs. Everybody in broadcasting and audiovisual content creation knew years in advance that 4K/8K UHD and high dynamic range (HDR) were coming, and that all the big TV and screen manufacturers were preparing 4K UHD HDR product lines because FHD was beginning to bore consumers. It came as no surprise when everybody had a 4K UHD product announcement and demo ready at the same time. Something very unusual happened at this year's GDC 2018, however. Multiple graphics and platform companies, like Microsoft, Nvidia, and AMD, as well as game developers and game engine makers, all announced that real-time ray tracing is coming to their mass-market products, and by extension, to computer games, VR content and other realtime 3D applications.

Why is this odd? Because for many years real-time ray tracing at 30+ FPS was thought to be utterly impossible with today's hardware technology. It was deemed far too computationally intensive for today's GPU technology and far too expensive for anything mass market. Gamers weren't screaming for the technology. Technologists didn't think it was doable at this point in time. Raster 3D graphics -- what we have in DirectX, OpenGL and game consoles today -- was very, very profitable and could easily have evolved further the way it has for another 7 to 8 years. And suddenly there it was: everybody announced at the same time that real-time ray tracing is not only technically possible, but also coming to your home gaming PC much sooner than anybody thought. Working tech demos were shown. What happened? How did real-time ray tracing, which only a few 3D graphics nerds and researchers in the field talked about until recently, suddenly become so technically possible, economically feasible, and so guaranteed-to-be-profitable that everybody announced this year that they are doing it?
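
Part of the answer is that nobody announced brute-force ray tracing of the whole frame: the GDC demos combine ordinary rasterization with only a few rays per pixel plus aggressive denoising, which changes the arithmetic by a couple of orders of magnitude. A rough sketch of that budget, with the per-pixel ray counts as assumptions rather than figures from the announcements:

    # Back-of-the-envelope ray budget for 4K at 60 fps (round numbers, assumptions).
    width, height, fps = 3840, 2160, 60
    pixels_per_second = width * height * fps     # ~5.0e8 pixel samples per second

    offline_style = pixels_per_second * 500      # hundreds of rays/pixel, film-style: ~2.5e11 rays/s
    hybrid_style  = pixels_per_second * 2        # 1-2 rays/pixel + denoiser, raster for the rest: ~1.0e9 rays/s

    print(f"offline-style sampling: {offline_style:.1e} rays/s")
    print(f"hybrid, denoised style: {hybrid_style:.1e} rays/s")

Roughly a billion rays per second is within shouting distance of what the highest-end GPUs of the day can manage, which is why working demos appeared as soon as the APIs (Microsoft's DirectX Raytracing and the vendors' extensions) were announced.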
Open Source

Ask Slashdot: Can FOSS Help In the Fight Against Climate Change? 154

dryriver writes: Before I ask my question, there already is free and open-source software (FOSS) for wind turbine design and simulation called QBlade. It lets you calculate turbine blade performance using nothing more than a computer and appears compatible with Xfoil as well. But consider this: the ultimate, most efficient and most real-world usable and widely deployable wind turbine rotor may not have traditional "blades" or "foils" at all, but may be a non-propeller-like, complex and possibly rather strange looking three-dimensional rotor of the sort that only a 3D printer could prototype easily. It may be on a vertical or horizontal axis. It may have air flowing through canals in its non-traditional structure, rather than just around it. Nobody really knows what this "ultimate wind turbine rotor" may look like.

The easiest way to find such a rotor might be through machine-learning. You get an algorithm to create complex non-traditional 3D rotor shapes, simulate their behavior in wind, and then mutate the design, simulate again, and get a machine learning algorithm to learn what sort of mutations lead to a better performing 3D rotor. In theory, enough iterations -- perhaps millions or more -- should eventually lead to the "ultimate rotor" or something closer to it than what is used in wind turbines today. Is this something FOSS developers could tackle, or is this task too complex for non-commercial software? The real world impact of such a FOSS project could be that far better wind turbines can be designed, manufactured and deployed than currently exist, and the fight against climate change becomes more effective; the better your wind turbines perform, and the more usable they are, the more of a fighting chance humanity has to do something against climate change. Could FOSS achieve this?
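
The loop described above -- generate a shape, simulate it, mutate, keep what performs better -- is a plain evolutionary search, which FOSS code can already express in a few dozen lines; the hard (and compute-hungry) part is the aerodynamic simulation in the middle. Here is a minimal sketch of that loop, with the rotor reduced to a vector of shape parameters and the simulation replaced by a stand-in scoring function (a real project would call out to QBlade or a CFD solver there):

    # Minimal sketch of the mutate-simulate-select loop described in the submission.
    import random

    def simulate(rotor):
        """Stand-in for an aerodynamic simulation: returns a made-up efficiency score."""
        return -sum((p - 0.6) ** 2 for p in rotor)   # pretend the optimum is all parameters at 0.6

    def mutate(rotor, sigma=0.05):
        """Randomly perturb each shape parameter."""
        return [p + random.gauss(0, sigma) for p in rotor]

    def evolve(n_params=12, generations=10_000):
        best = [random.random() for _ in range(n_params)]
        best_score = simulate(best)
        for _ in range(generations):
            candidate = mutate(best)
            score = simulate(candidate)
            if score > best_score:                   # keep only mutations that perform better
                best, best_score = candidate, score
        return best, best_score

    rotor, score = evolve()
    print(score)   # approaches 0 as the parameters approach the stand-in optimum

A real project would keep a whole population rather than a single survivor and, as the submission suggests, use a learned model to bias which mutations get tried, but the overall structure stays the same.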
Facebook

Ask Slashdot: Is There a Good Alternative to Facebook? (washingtonpost.com) 490

Long-time Slashdot reader Lauren Weinstein argues that fixing Facebook may be impossible because "Facebook's entire ecosystem is predicated on encouraging the manipulation of its users by third parties who possess the skills and financial resources to leverage Facebook's model. These are not aberrations at Facebook -- they are exactly how Facebook was designed to operate." Meanwhile one fund manager is already predicting that sooner or later every social media platform "is going to become MySpace," adding that "Nobody young uses Facebook," and that the backlash over Cambridge Analytica "quickens the demise."

But Slashdot reader silvergeek asks, "is there a safe, secure, and ethical alternative?" to which tepples suggests "the so-called IndieWeb stack using the h-entry microformat." He also suggests Diaspora, with an anonymous Diaspora user adding that "My family uses a server I put up to trade photos and posts... Ultimately more people need to start hosting family servers to help us get off the cloud craze... NethServer is a pretty decent CentOS based option."

Meanwhile Slashdot user Locke2005 shared a Washington Post profile of Mastodon, "a Twitter-like social network that has had a massive spike in sign-ups this week." Mastodon's code is open-source, meaning anybody can inspect its design. It's distributed, meaning that it doesn't run in some data center controlled by corporate executives but instead is run by its own users who set up independent servers. And its development costs are paid for by online donations, rather than through the marketing of users' personal information... Rooted in the idea that it doesn't benefit consumers to depend on centralized commercial platforms sucking up users' personal information, these entrepreneurs believe they can restore a bit of the magic from the Internet's earlier days -- back when everything was open and interoperable, not siloed and commercialized.

The article also interviews the founders of Blockstack, a blockchain-based marketplace for apps where all user data remains local and encrypted. "There's no company in the middle that's hosting all the data," they tell the Post. "We're going back to the world where it's like the old-school Microsoft Word -- where your interactions are yours, they're local and nobody's tracking them." On Medium, Mastodon founder Eugen Rochko also acknowledges Scuttlebutt and Hubzilla, ending his post with a message to all social media users: "To make an impact, we must act."
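
Mastodon's openness extends to its API: most instances expose their public timeline over plain, unauthenticated REST, so you can look around the network before committing to an account. A small sketch, assuming the Python requests library and using mastodon.social purely as an example instance:

    # Fetch a few recent posts from a Mastodon instance's public timeline.
    # No account or API key is needed on instances that leave this endpoint open.
    import requests

    def public_timeline(instance: str = "https://mastodon.social", limit: int = 5) -> None:
        resp = requests.get(f"{instance}/api/v1/timelines/public", params={"limit": limit})
        resp.raise_for_status()
        for status in resp.json():
            author = status["account"]["acct"]   # username; remote users include their home instance
            html = status["content"]             # post body is returned as HTML
            print(f"{status['created_at']}  @{author}: {html[:80]}")

    public_timeline()

Because the protocol and code are open, anyone can run such an instance and federate it with the rest of the network -- the property the Post article is describing.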

Lauren Weinstein believes Google has already created an alternative to Facebook's "sick ecosystem": Google Plus. "There are no ads on Google+. Nobody can buy their way into your feed or pay Google for priority. Google doesn't micromanage what you see. Google doesn't sell your personal information to any third parties..." And most importantly, "There's much less of an emphasis on hanging around with those high school nitwits whom you despised anyway, and much more a focus on meeting new persons from around the world for intelligent discussions... G+ posts more typically are about 'us' -- and tend to be far more interesting as a result." (Even Linus Torvalds is already reviewing gadgets there.)

Wired has also compiled their own list of alternatives to every Facebook service. But what are Slashdot's readers doing for their social media fix? Leave your own thoughts and suggestions in the comments.

Is there a good alternative to Facebook?
Sci-Fi

Ask Slashdot: Is Beaming Down In Star Trek a Death Sentence? 593

Artem Tashkinov writes: Some time ago, Ars Technica ran a monumental article on beaming of consciousness in Star Trek and its implications, and more importantly, whether it's plausible to achieve that without killing a person in the process.

It seems possible in the Star Trek universe. However, physicists currently find the idea absurd and unreal because there's no way you can transport matter and its quantum state without first destroying it and then recreating it perfectly, due to Heisenberg's Uncertainty Principle. The biggest conundrum of all is the fact that pretty much everyone understands that consciousness is a physical state of the brain, which features continuity as its primary principle; yet it surely seems like copying said state produces a new person altogether, which brings up the problem of consciousness being local to one's skull and inseparable from gray matter. This idea sounds a bit unscientific because it introduces the notion that there's something about our brain which cannot be described in terms of physics, almost like a soul.
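
For reference, the uncertainty relation the submitter is leaning on says that position and momentum cannot both be pinned down exactly at the same time,

    \Delta x \, \Delta p \ge \frac{\hbar}{2}

so a transporter that tried to measure every particle classically and rebuild the body from that readout could never capture the full quantum state. Whether that state even matters for personal identity is, of course, exactly the philosophical question being asked.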

This also raises another very difficult question: how do we know we are the same person when we wake up in the morning, or after being put under general anesthesia? What are your thoughts on the topic?
Businesses

Ask Slashdot: Were Developments In Technology More Exciting 30 Years Ago? 231

dryriver writes: We live in a time where mainstream media, websites, blogs, social media accounts, your barely computer-literate next-door neighbor and so forth frequently rave about the "innovation" that is happening everywhere. But as someone who experienced developments in technology back in the 1980s and 1990s, in computing in particular, I cannot shake the feeling that, somehow, the "deep nerds" who were innovating back then did it better and with more heartfelt passion than I can feel today. Of course, tech from 30 years ago seems a bit primitive compared to today -- computer gear is faster and sleeker nowadays. But it seems that the core techniques and core concepts used in much of what is called "innovation" today were invented one after the other back then, going back as far as the 1950s maybe. I get the impression that much of what makes billions in profits today and wows everyone amounts to mere improvements on what was actually invented and trailblazed for the first time, 2, 3, 4, 5 or more decades ago. Is there much genuine "inventing" and "innovating" going on today, or are tech companies essentially repackaging the R&D and know-how that was brought into the world decades ago by long-forgotten deep nerds into sleeker, sexier 21st century tech gadgets? Are Alexa, Siri, the Xbox, the Oculus Rift or the iPhone truly what could be considered "amazing technology," or should we have bigger and badder tech and innovation in the year 2018?
