How Will You Replace Google Reader?
(Disclaimer: I'm going to use the term 'bandwidth' universally instead of the more correct 'latency' or 'throughput' so normal people can hopefully understand this post.) The biggest problem I have with every alternative I have tried is that they are built with the most annoying design flaws. They are so painful to me that I am certain these flaws will be looked back upon as the GeoCities of modern-day web development.
When I fire up an alternative, the responsiveness that was in Google Reader just isn't there. And it always seems like the alternatives require you to hit "refresh" on their interface, and then what happens? It apparently makes a call out to every single RSS feed to get updates. On the surface this may seem like the standard HTTP way of thinking about things. But it makes for a shit user experience. I have thousands of RSS feeds. Thousands. And if I hit refresh in this paradigm, my browser makes 1,000+ HTTP GET requests. It's not a lot of data, but if even one of those requests is slow, it usually blocks the interface from ceding control back to me.
So let's iterate on improvements here that will get us back to Google Reader-style responsiveness, shall we? One of the simplest improvements I can see is to make these requests asynchronous and nonblocking with web workers. You can attach one to the div or construct that each feed is displayed in and only have it work when that feed is visible (for instance, if I am collapsing/expanding folders of feeds). You can grey out a feed until its request comes back, but if another request returns first, it is parsed, inserted, and displayed immediately. That way, if cnn.com comes back faster than NASA's Photograph of the Day, I can read while waiting for my images.
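That render-as-it-arrives idea can be sketched roughly like this (the `fetchFeed`, `render`, and `markFailed` callbacks are placeholders of mine, not any particular reader's API):

```javascript
// Fire every feed request at once and paint each feed the moment its own
// response lands; a slow or dead feed stays greyed out instead of stalling
// the whole page behind one blocking request.
function refreshFeeds(feedUrls, fetchFeed, render, markFailed) {
  for (const url of feedUrls) {
    fetchFeed(url)                         // non-blocking, e.g. fetch(url).then(r => r.text())
      .then(body => render(url, body))     // insert into that feed's div as soon as it arrives
      .catch(err => markFailed(url, err)); // leave just that one feed greyed out
  }
}
```

With a thousand feeds, the slowest server now only delays its own div, never the page.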
On the downside, this greatly complicates the server side. You need one be-all, end-all "cache" or store of every incoming feed that any user is subscribed to, and for each of these feeds you need the list of users subscribed to it. Your server now maintains the HTTP GET requests to cnn.com and NASA in order to get updates. When it gets an update, there are two ways you could handle it (per-user queues are complicated, so I won't suggest those), but the most basic way is to send it right out to everyone on that subscription list who has an active WebSocket session established with their account. If a new WebSocket session is established, the client simply gets the last N stories from its subscriptions (Google included pagination backwards, binned by time). To save even more bandwidth, you could store stories on the client side with HTML5 Web Storage; then the first thing the WebSocket client does is find the date on the last stored element and send that across to the server to establish the session. The server responds with any updates past that time. From there your WebSocket merely listens and inserts elements into the page when they arrive.
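A minimal sketch of the client half of that handshake (the message shape, storage key, and function names are my own assumptions, not any real service's protocol):

```javascript
// Find the newest timestamp among locally stored stories, so the server only
// has to replay what this client actually missed.
function lastSeenTime(storedStories) {
  return storedStories.reduce((max, s) => Math.max(max, s.time), 0);
}

// Browser-side wiring: open the socket, send the resume point on connect,
// then just append incoming stories and keep Web Storage current.
function openFeedSocket(url, storage) {
  const stored = JSON.parse(storage.getItem('stories') || '[]');
  const ws = new WebSocket(url);
  ws.onopen = () => ws.send(JSON.stringify({ since: lastSeenTime(stored) }));
  ws.onmessage = ev => {
    stored.push(JSON.parse(ev.data));                   // one new story
    storage.setItem('stories', JSON.stringify(stored)); // persist it locally
  };
  return ws;
}
```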
Of course, after you valiantly save your RSS providers from death by a thousand cuts, you yourself face that same fate. And now you know why Google scheduled Reader's shutdown.
It's time for people to stop with this pretending to write computer games nonsense.
Please re-read those sentences until you get it through your head. If you want to write a computer game, start with a real programming language:
1. C
2. C++
I recommend C so you don't get distracted by all the horse**** theory around OOP.
If you need a graphics engine, fine. Get one. Then code the game in a real development environment on a real computer (not a fiddly mobile device) on the metal. It will be hard, but the results will be worth it. If you can't bring yourself to do this, then you have no business programming or writing computer games.
That is all.
It's time for people to stop with this pretending to write computer games nonsense.
C/C++ is not a suitable development environment. VI is not a suitable development environment. Emacs is not a suitable development environment. C++14 is vaporware. C++ is designed to focus on OBJECTS, not GAMES.
Please re-read those sentences until you get it through your head. If you want to write a computer game, start with a real instruction set:
1. ARM (ARMv7 not ARMv8)
I recommend ARM so you don't get distracted by all the horse**** bloat around CISC.
If you need a punch card reader, fine. Get one. Then code the game on a real ENIAC (not a fiddly PC) in the vacuum tubes. It will be hard, but the results will be worth it. If you can't bring yourself to do this, then you have no business programming or writing computer games.
P.S. I'm not interested in C++ OOP or whatever "Stroustrup's latest fly-by-night language" is, and I've been programming computers for 1,337 years, so I'm not interested in your tech credentials either. Either code the game or don't code the game, but knock it off with this artificial C/C++ development nonsense.
That is all.
author: David Geary
pages: 723
publisher: Prentice Hall
summary: An introduction to game development in HTML5's canvas that brings the developer all the way up to graphics, animation, and basic game development.
About a year ago, I started a hobby project to develop a framework for playing cards in the browser on all platforms. The canvas element was the obvious tool of choice for accomplishing this goal. Unfortunately, I began development with a very HTML4 attitude and what I now recognize was laughable resource management. This book really helped me further along in getting that hobby project to a more usable state.
The first chapter of the book introduces the reader to the basics of HTML5 and the canvas element. The author covers things like using clientX and clientY for mouse events instead of x and y. A simple clock is built and shows how to correctly use the basic drawing parts of the HTML5 specification. For readers unfamiliar with graphics applications, a lot of ground is covered on how you programmatically start by constructing an invisible path that will not be visually rendered until stroke() or fill() is called. The chapter also covers the basic event/listener paradigm employed by almost anything accepting user input. Geary explains how to properly save and restore the surface instead of trying to graphically undo what was just done.
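As a rough illustration of both points (the function and its arguments are mine, not the book's): nothing appears until stroke() or fill() is called, and save()/restore() undoes transforms in one call instead of you reversing them by hand:

```javascript
// Draw one tick mark of a clock face on a 2D canvas context.
function drawTick(ctx, cx, cy, radius, angle) {
  ctx.save();            // snapshot the current transform and styles
  ctx.translate(cx, cy); // move the origin to the clock's center
  ctx.rotate(angle);     // rotate so "up" points at this tick's angle
  ctx.beginPath();       // start a path -- still invisible...
  ctx.moveTo(0, -radius);
  ctx.lineTo(0, -radius + 10);
  ctx.stroke();          // ...only rendered here
  ctx.restore();         // undo the translate/rotate in one call
}
```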
An important theme through this book is how to use HTML elements alongside a canvas. This was one of the first follies of my "everything goes in canvas" attitude. If you want a control box in your application, don't reinvent the partially transparent box with paths and fills followed by mouse event handling over your canvas (actually covered in Chapter 10) – simply use an HTML div and CSS to position it over your canvas. Geary shows how to do this, which would have saved me a lot of time. He also shows how to manage off-screen (invisible) canvases in the browser, which comes in mighty handy when boosting performance in HTML5. The final parts of Chapter One focus on remedial math and how to correctly handle units of measure when working in the browser.
Chapter Four was an eye-opener on images, video, and their manipulation in canvas. The first revelation was that drawImage() can also render another canvas or even a video frame into the current canvas. The API name didn't suggest that to me, but after reading this chapter it became apparent that if I sat down and created a layout of my game's surface, I could render groups of images into one off-screen canvas and then continually insert that canvas into view with drawImage(). This saved me considerable re-rendering calls. The author also includes some drag-and-drop sugar in this chapter. The book helped me understand that there are sometimes both legacy calls to old ways of doing things and multiple new ways to accomplish the same goal. When you're trying to develop something as heavy as a game, there are a lot of pitfalls.
Chapter Five concentrates on animations in HTML5 and, first and foremost, identifies a problem I had struggled with in writing a game: don't use setInterval() or setTimeout() for animations. These are imprecise, so the book instead guides the reader through letting the browser select the frame rate. Being a novice, the underlying concepts of requestAnimationFrame() had eluded me prior to reading this book. Geary's treatment of each browser's nuances with this method may someday be dated text, but it helped me understand why the API call is so vital, and it helps you build workarounds for each browser if you need them. Blitting was also a new concept to me, as was the tactic of double buffering (which the browser already applies to canvas). This chapter is heavy on the hidden caveats of animation in the browser and builds on them to implement parallax and a stopwatch. The end of the chapter has a number of particularly useful "best practices" that I now see as crucial in HTML5 game development.
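The core of that advice, as I understand it, is to scale motion by elapsed time rather than by ticks, and then let the browser schedule the frames (the helper names here are mine, not the book's):

```javascript
// Move a sprite by its pixels-per-second velocity, scaled by elapsed seconds,
// so it travels at the same speed whether the browser delivers 30 or 60 fps.
function advance(sprite, dtSeconds) {
  sprite.x += sprite.vx * dtSeconds;
  sprite.y += sprite.vy * dtSeconds;
  return sprite;
}

// Browser driver: requestAnimationFrame picks when frames happen.
function animate(sprite, draw) {
  let last = performance.now();
  requestAnimationFrame(function frame(now) {
    advance(sprite, (now - last) / 1000); // elapsed time, not a fixed tick
    last = now;
    draw(sprite);
    requestAnimationFrame(frame);         // schedule the next frame
  });
}
```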
Chapter Six details sprites and sprite sheets. Here the author gives us a brief introduction to design patterns (notably Strategy, Command, and Flyweight), though it's curious that this isn't carried through the rest of the text. This chapter covers painters in good detail and again shows how to implement motion and timed animation via sprites with requestNextAnimationFrame(). It does a great job of showing how to quickly animate a sprite sheet.
Chapter Seven gives the reader a brief introduction to implementing simple physics, like gravity and friction, in a game engine. It's actually just enough to move forward with the upcoming games, but the most useful section of this chapter to me was how to warp time. While this motion looks intuitive on screen, it was refreshing to see the math behind ease-in and ease-out effects. These simple touches look beautiful in canvas applications and are critical, of course, in modeling realistic motion.
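The math behind those effects boils down to warping a normalized clock before interpolating. These particular polynomials are the common quadratic easings, not necessarily the book's exact formulas:

```javascript
// Warp normalized time t in [0, 1] so velocity ramps instead of jumping.
const easeIn    = t => t * t;                 // slow start, fast finish
const easeOut   = t => 1 - (1 - t) * (1 - t); // fast start, slow finish
const easeInOut = t => t < 0.5
  ? 2 * t * t
  : 1 - 2 * (1 - t) * (1 - t);                // slow at both ends

// Interpolate a position along the warped clock.
const positionAt = (start, end, t, warp) => start + (end - start) * warp(t);
```

Halfway through an ease-in motion (t = 0.5), an object has only covered a quarter of the distance, which is exactly the "ramping up" the eye expects.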
Naturally, the next thing a game needs is collision detection, and Chapter Eight scratches the surface just enough to build our simple games. A lot of fundamental concepts are discussed, like detecting collisions before versus after they happen. Geary does a nice job of biting off just enough to chew from the strategies of ray casting, the separating axis theorem (SAT), and minimum translation vector algorithms for detecting collisions. Being a novice to collision detection, SAT was a new concept to me, and I enjoyed Geary's illustrations of the projection axes drawn perpendicular to the polygons' edges. This chapter did a great job of visualizing what the code was achieving. The last thing it tackles is how to react to, or bounce off from, a collision. This provided enough for the games but felt like an afterthought to collision detection. Isn't there a possibility of spin on the object that could influence a bounce? These sorts of questions didn't appear in the text.
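For readers as new to SAT as I was, the whole theorem fits in a short sketch (convex polygons as arrays of {x, y} vertices; the function names are mine): if the projections onto every edge normal overlap, the polygons collide; a gap on any one axis proves they don't.

```javascript
// Project a polygon's vertices onto an axis; keep only the min/max extents.
function project(poly, axis) {
  let min = Infinity, max = -Infinity;
  for (const p of poly) {
    const d = p.x * axis.x + p.y * axis.y; // dot product = scalar projection
    if (d < min) min = d;
    if (d > max) max = d;
  }
  return { min, max };
}

// One candidate separating axis per edge: the edge's perpendicular (normal).
function edgeNormals(poly) {
  const axes = [];
  for (let i = 0; i < poly.length; i++) {
    const a = poly[i], b = poly[(i + 1) % poly.length];
    axes.push({ x: -(b.y - a.y), y: b.x - a.x });
  }
  return axes;
}

function polygonsCollide(p1, p2) {
  for (const axis of [...edgeNormals(p1), ...edgeNormals(p2)]) {
    const a = project(p1, axis), b = project(p2, axis);
    if (a.max < b.min || b.max < a.min) return false; // separating axis found
  }
  return true; // no separating axis exists: the polygons overlap
}
```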
And Chapter Nine gets to the main focus of this book: writing an actual game with all our accumulated knowledge. Geary calls this light game engine "the ungame" and adds multitrack sound, keyboard event handling, and a heads-up display to our repertoire. This chapter is very code-heavy, and it puzzles me why Geary prints comments inline in the code when he has a full book format to publish his words in. The ungame is so called because it puts together many elements of a game while still missing the basic play elements. Geary then starts in on implementing a pinball game. That may sound overly complicated for a learning text, but as each piece of the puzzle is broken down, the author manages to describe and explain it fairly concisely. While this section could use more description, it is basically bringing together and applying our prior concepts, like emulating physics and implementing realistic motion. The pinball board is merely polygons plus our code to detect collisions with the circle that is the ball. It was surprising how quickly a pinball game came together.
Chapter Ten takes a look at making custom controls (as mentioned earlier about using HTML when possible). From progress bars to image panners, this chapter was interesting, and I really enjoyed the way the author showed how to componentize and reuse these controls and their parts. There's really not a lot to say about this chapter; as you may imagine, a lot of already-covered components are reused in achieving these controls and effects.
Geary recognizes HTML5’s alluring potential of being a common platform for developing applications and games across desktops and mobile devices. In the final chapter of the book, he covers briefly the ins and outs of developing for mobile — hopefully without having to force your users to a completely different experience. I did not realize that native looking apps could be achieved on mobile devices with HTML5 but even with that trick up its sleeve, it’s hard to imagine it becoming the de facto standard for all applications. Geary appears to be hopeful and does a good job of getting the developer thinking about the viewport and how the components of their canvas are going to be viewed from each device. Most importantly, it’s discussed how to handle different kinds of input or even display a touch keyboard above your game for alphabetic input.
This was a delightful book that will help readers understand the finer points of developing games in HTML5's canvas element. While it doesn't get you to the point of developing three-dimensional blockbuster games inside the browser, it does bite off a very manageable chunk for most readers. And, if you're a developer looking to get into HTML5 game design, I heavily recommend this text as an introduction.
But this is coming from someone who's probably going to see Frances Ha tonight and is still trying to get his hands on a copy of Incendies so if you want to laugh and don't want to have to think
I have two resumes in front of me. I need someone who can write some fairly complicated software. Are they writing the kernel to an operating system? No. But they'll be making complexity decisions between a server and a client. Not exactly new or novel but important to me and my clients.
So I look at one resume, and this guy has suffered through integration by parts, linear algebra, differential equations, and maybe even abstract algebra. The other guy went to a programming trade school where those are not taught. The trade school likely taught inheritance, pointers, typecasting, and all that good stuff, just like a Bachelor of Science program would.
Now, do my solutions need integration by parts, linear algebra, and differential equations? Absolutely not. But if I'm going to pick between the two, I'm going to take the applicant who solved more difficult problems in order to make it through a class. Few people actually care about those concepts deep in their hearts -- and I'm sure neither of my prospective employees did. But in that same vein, no rational developer is going to care at all that my client likes to be able to drag and drop files instead of navigating the filesystem to find the files he wants. But I want the applicant who's going to do the inane stuff that he doesn't personally view as important.
Challenge yourself. Take the math courses. Take the logic courses. Take the statistics and combinatorics courses. Take the finite automata courses. Prove to yourself that there are no obstacles in your way. They are a great expense of time now but they are a huge investment in yourself -- no matter how pointless they appear to you.
If I had understood what I was doing, maybe I wouldn't mind so much.
You should attack this problem two different ways: 1) increase the amount of time you allot to your own personal enrichment in these topics/courses (three hours is very little time if you are approaching new concepts in math) and 2) seek outside instruction as it's also possible you have a professor who doesn't understand what they're doing either (the teaching, not the subject matter).
The more I read about what these guys were doing--and I mean the stuff they've admitted to, not just been accused of--the more I think they are getting what they deserve. Breaking into someone's network to get at information that the public should know is political. Breaking into someone's network and racking up charges on personal credit card numbers is criminal. They're like the idiots who smash store windows during street protests.
I agree they are not the good guys. But I also think it's important to mete out justice based on who was doing what. I hope that in street protests where windows are smashed, the vandals are correctly identified and brought to justice. Similarly, I hope they find who is responsible for the credit card thefts, but it appears Hammond is not, and there are reports he did not benefit personally from this intrusion:
Barrett Brown of Dallas, Texas is expected to stand trial starting this September for a number of charges, including one relating to the release of Stratfor subscribers’ credit card numbers. He faces a maximum of 100 years in prison.
Switching to GIMP, my productivity is about to go through the roof!
It's not about productivity; it's about economic impact. The article is kind of tongue-in-cheek, poking fun at the BSA's erroneous manipulation of numbers to show that "properly licensed software" contributes oh so much to the economy. For obvious reasons, your switch to GIMP from (presumably) a proprietary alternative wouldn't move you from one column to the other unless you were to somehow pirate GIMP. While pirating GIMP is possible, you'd likely just install it legally by downloading it with references to the GPLv3 license. Whether or not you believe it, GIMP with a copy of the GPLv3 is properly licensed software -- putting it in the column of the nebulous cloud of software that the BSA claims inflates our world economy to staggering heights.
To try to quantify the "productivity" of GIMP versus something like Photoshop would be subjective, nebulous, and not one-to-one. This isn't about productivity; it's about piracy. The author is pointing out how much of the mad money comes from open source software and all but accuses the BSA of co-opting that figure as their own work.
We have a reverse-osmosis filtering system on the water source for the humidifiers in the environmental chambers in our test lab at work. It's not unknown technology. The old-fashioned alternative is a still.
Are these breweries currently using unfiltered, unpurified water?
As someone who consumes large amounts of beer: the water from certain aquifers carries salts and minerals that are actually desirable in beer and can have a positive or negative effect on the yeast. An adequate amount of calcium, magnesium, and zinc is necessary for some of the yeast's metabolic pathways. I believe most brewers add these things to aid the yeast as much or as little as they want, but I am almost certain that RO would strip all of this out of the water along with anything bad.
This becomes especially apparent when a very large brewery like Anheuser-Busch or SABMiller buys out a smaller brewery like Leinenkugel's and moves production from Wisconsin to Missouri or wherever is most convenient for their supply lines. Often they keep the same formula, make little adjustments to it, and rely on brand loyalty. And as someone who has consumed vast amounts of Leinies in Chippewa Falls, WI and also on the East Coast, I can tell you right now that Leinies out here tastes like shit, and I'd much prefer Yuengling, Troegs, or any of the more local breweries.
And my suspicion is that they take shit water, put it through RO, and don't or can't make the proper adjustments to add Sconnie minerals back, resulting in an inferior product. Don't get me wrong, I love RO water. I worked at a restaurant that served only triple reverse-osmosis water with some salts and minerals added post-process, and holy hell, that was the most refreshing thing I've ever drunk. But these breweries are operating on top of hundreds of years of adjustments to their local aquifers, and just asking them to insert RO water into their process is easier said than done.
What I feel sorry for is any researcher who wants to do some genuine research into cold fusion.
The trick is that you don't put your conclusion before your hypothesis. "Cold fusion" is the conclusion, or the result, of the whole process that would produce your utopian revolutions (again, something that comes after the conclusion, a desired symptom of the result of this sort of research). When your research begins with you working backwards, that's when the red flags should go up, because there is no logical way to work backwards. Sometimes a sci-fi author will imagine something, but it takes a very talented scientist/researcher/inventor/engineer/whatever to go from hypothesis to that end construct -- and even then there's often a slight catch or permutation of the original idea.
What this paper appears to do is formalize observations.
The fear is that Rossi stumbled upon a neat trick that is just not sustainable, but he realizes that if he controls the parameters of the experiments, he can make it look like the thing works. Then he rakes in billions and walks away from any involvement. It is suspicious because it's being conducted at a university that should be making obvious logical steps forward. Yet we continually see only "demonstrations" like his "public displays" and "observations" like this paper.
My charges are still borderline character assassination/ad hominem, and this could very well work. But I've had enough talk of what is "perceived to happen," and I'm afraid that someone has a really neat trick that they've already thoroughly investigated and figured out why it works. Maybe it even fooled them in the beginning. But there is truly no good way to monetize this trick, so they give everyone else only enough information to make them think it works. Then they capitalize on the public interest and walk away just before the reveal.
If not, I apologize, but I also wouldn't buy into this idea until we start with a hypothesis, tests are reproduced around the world, and the true reason behind this anomaly is well understood and is indeed a good energy answer. It's entirely possible he doesn't know yet and his greed is the reason we only get tastes of this device. If that's true, however, we still don't know whether it's a good answer to our energy addiction.
I only hope there are enough details in this paper for other researchers around the world to reproduce and analyze these results. I'm sorry if this is just a matter of an ill-equipped laboratory at Bologna University, but with all the interest this has generated, I would be surprised if that was the reason.
In conclusion: start with a hypothesis, openly publish your methods and results, and wait for others to reproduce them. Your rigor and its results will be your vindication if you fear being attacked for doing research. Just don't start your research by saying, "I'm going to make cold fusion, and cheap energy is just ten years away." That's when you're openly attacked for good reason -- that's not science; those are words you spout to get money.
Lee added that the Scripps Hackers eventually used Wget to find and download "the Companies' confidential files." (Wget was the same tool used by Facebook's Mark Zuckerberg in the film The Social Network to collect student photos from various Harvard University directories.) The rest of the letter pretty much blamed the "Scripps Hackers" for the cost of breach notifications, demanded Scripps hand over all evidence as well as the identity and intentions of the hackers, before warning that Scripps will be sued.
Folks, there was a big bad security breach. Now, *adjusts his massive belt buckle* we're investigating this like we would any other serious crime. And right now we're just trying to identify weapons used in this heinous attack. Now, we've discovered that the hackers were using a very vicious mechanism in this attack. In a murder, you might find a revolver used to put two bullets into the back of a poor old defenseless lady's skull in order to get all her coupons and a couple of Indian head pennies out of her purse. Or perhaps in a pedophile case, you'll find the "secret candy" that was used to lure the children into a white panel van with painted over windows.
*expels a long tortured sigh*
Well, I gotta say, in my thirty years on the force, I wish we were only dealing with something like that today, honest to God Almighty I really do. Instead this artifact was discovered at the scene of the crime. Now, I'm not asking you to understand that -- hell, I'd warn you against even openin' up your browser to the devil's toolbox. But let me, a trained law enforcement professional, take the time to explain the gruesome evidence just one HTTP request away from you and your chillun'. The page is black. Black as a moonless night sky when raptors swoop from the murky inky nothing to take your kids and livestock back up with them silently. On it is a bunch of white text that makes no sense to any God fearun' man on this here Earth. That's what they call a "man page" probably because it is the ultimate culmination of man's sin and lo and behold it displays a guide to exact torture on innocent web servers across this great and holy internet.
Even if you want to use this "man page" for WGET to learn how to use Satan's server scythe, you would have to read through almost twenty pages of incomprehensible technobabble like what that kraut over in Cali -- the one who took his wife's life -- spoke. And if you want to just see an example, it's not at the top! No, why, it's all the way down at the bottom. For this one, they don't even have examples. Just enough options to kill a man. Probably gave Steve Jobs cancer, they never proved all these options in these pages didn't. Buried in the mud of a thousand evils lie more evils.
And why, oh why are we even wasting taxpayer money on these Scripps Journos? Who needs a trial when the evidence is in the tools they used? Folks, I think it's time we WGET one last thing. I'll WGET a rope, and you WGET your pitchforks and torches.
Book Review: The Plateau Effect: Getting From Stuck To Success is identical to this Amazon review.
Book Review: The Death of the Internet is identical to this Amazon review.
Book Review: Everyday Cryptography is identical to this Amazon review.
Book Review: Liars and Outliers is identical to this Amazon Review.
It just keeps going
I post this having not read a single page of this book. I was interested in getting this book for my attorney wife. When looking at it on AMAZON.COM, I noticed that the post here is a copy of only ONE of TWO reviews the book has on Amazon.com. The other review is HORRIBLE. http://www.amazon.com/Locked-Down-Information-Security-Lawyers/product-reviews/1614383642/ref=cm_cr_dp_qt_hist_one?ie=UTF8&filterBy=addOneStar&showViewpoints=0 Read/order with caution.
As someone who occasionally writes reviews for Slashdot (and usually reads all of the ones posted), this is a clear violation of the book review guidelines:
First, an important one: by submitting your review to Slashdot, you represent that the review is your own work, that it is original to Slashdot, and that it is unencumbered by any existing or anticipated contractual relationship; further, you are granting Slashdot permission to publish your review, including any editing the Slashdot editorial team finds necessary and appropriate. (Major edits will involve consultation by email or other means.) If you've reviewed the book elsewhere anywhere besides a personal home page (for instance, on Amazon) please be sure that your review for Slashdot is substantially different.
(Emphasis mine.) There is no difference that I can see.