I already have a device which records my discussions with my wife.
I'm presuming the device you're talking about is your wife? Marriage. Ain't it wonderful?
You're confusing Debt/Asset with Deficit/Surplus. Debt is what we owe; an Asset is what we own free and clear. A Deficit is the amount we're spending above revenues; a Surplus is the amount of revenue above spending. Clinton, while the country's total Debt still rose, got Congress to pass a budget that generated a Surplus. Over time, had the Surplus been maintained, it would have started eating into the Debt and hopefully might eventually have turned it into net Assets. Unfortunately, Bush turned that all around in his first term and began marching the Deficit back up... thus increasing the Debt faster. At least Obama has reduced the Deficit, even though it's still adding to the Debt, just not as fast as when he started. You may ask: how can Obama be spending more but reduce the Deficit? Simply because spending didn't grow at a faster rate than revenues did (and revenues rise with GDP).
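The bookkeeping above is just running arithmetic. A tiny sketch, with made-up round numbers (not actual federal figures), shows how a sustained surplus chips away at the debt:

```python
# Illustrative numbers only -- not actual federal budget figures.
debt = 5_000.0      # starting national debt, in billions
revenue = 2_000.0   # annual revenue, in billions
spending = 1_950.0  # annual spending; less than revenue, so we run a surplus

for year in range(1, 4):
    deficit = spending - revenue   # a negative deficit is a surplus
    debt += deficit                # each year's surplus eats into the debt
    print(f"Year {year}: deficit {deficit:+.0f}B, debt {debt:.0f}B")
# The debt falls from 5,000B to 4,850B over the three years.
```

Flip spending above revenue and the same loop shows the Bush-era pattern: the deficit goes positive and the debt marches back up.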
Your PCs would all still talk to each other at GbE speed, since they're physically connected through one or more GbE switches, depending on the number of physical machines. The only place you'd need wireless is between the router that plugs into the DSL modem and the access point / wireless bridge, which can be a unit that plugs directly into a surge protector/lightning suppressor and has an Ethernet port you can cable to one of your GbE switches. No additional wireless cards are needed, as the access point acts as the wireless card for the entire network. Also, if you get a router and an access point with removable antennas, you can swap the stock omnidirectional antennas for a directional system. A directional system improves your signal strength, which in turn ensures that your access point/bridge and router connect at the maximum possible speed. It also improves the security of the connection on top of WPA2 encryption, since you're emitting a focused beam of radio waves instead of a spherical field. As already noted, this also provides an air gap between the phone-line equipment and your more expensive computer equipment, reducing the number of attack vectors a lightning strike can take to reach your more expensive machines.
If you're worried about losing GbE file-transfer speed to remote servers through your DSL modem, don't. Your LAN is still GbE, with all the computers still running at GbE speed with each other. Depending on whether you get Wireless N or Wireless AC, you'll have roughly 300 Mb/s or 800 Mb/s link speeds respectively over the wireless hop. All of this is still faster than whatever your ISP gives you (average ~3-7 Mb/s on DSL, ~10-30 Mb/s on cable, give or take), so you never had GbE transfer over the Internet anyway... unless you're that lucky grandma in Europe (Sweden, I think?) who's getting 40 Gb/s.
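To put rough numbers on why the wireless hop doesn't matter for Internet traffic, here's a back-of-the-envelope transfer-time comparison. The link rates are nominal spec figures (real-world throughput is lower) and the 10 GB file size is an arbitrary example:

```python
file_gb = 10  # hypothetical 10 GB file, decimal gigabytes

# Nominal link rates in megabits per second, ignoring protocol overhead.
links = {"GbE LAN": 1000, "Wireless AC": 800, "Wireless N": 300, "DSL": 5}

size_megabits = file_gb * 8 * 1000  # 10 GB = 80,000 megabits
for name, mbps in links.items():
    minutes = size_megabits / mbps / 60
    print(f"{name:12s} {minutes:8.1f} min")
```

Even the slowest wireless hop finishes the file in minutes, while the DSL link takes hours; the bottleneck is the ISP line, not the access point.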
...enjoy some novel food, make new acquaintances that they wouldn't normally meet (i.e., networking), see old friends, and generally enjoy being alive...
Ahhhhh... reminds me of the good old days of Chess Club, Anime Club, and the good ole FFA.
There's more to life than the gladiatorial combat of men tossing balls up and down a field to the cheers of the masses.
It's about 8 Saturdays per year
OK, 8 Saturdays of limited access to academic facilities. That's 8 Saturdays across two semesters, starting right about finals time of fall semester, where an academic who's going to school full time and working full time needs to do lab work for his degree, happens to have weekly projects due every Monday, and winds up making a three-mile hike to get to the science building, because the parking lot right next to it also happens to be the one closed off for game use. Oh, and let's not forget that it's a three-mile hike carrying a notebook, at least a few books, and potentially additional equipment. There's not much room for time management and working around game schedules when your own workload is scheduled and enforced by other people who have no time for forgiveness.
Have your sports, but keep them away from my path... because now that I have the power, I can and will steamroll the field into a new parking lot for the science building, because I too no longer have time for forgiveness.
Read the summary and you'll see that we're not talking about a high-powered audio system; we're talking about built-in laptop audio. I have yet to see a modern laptop with a hardware volume-control potentiometer (a volume knob) that controls the voltage going to the speakers. Instead, the manufacturers hard-wire the volume to a specific resistance, so these speakers run at a fixed maximum limit 100% of the time they're turned on. Understand that I'm not saying the manufacturers run the speaker system at full bore 100% of the time; rather, it's hardware-limited to what is supposed to be a safe level below that, to ensure the speakers don't blow themselves out. On these systems, the volume-control buttons operate the volume-control applet of the operating system, which controls loudness within the range from mute to the hardware-limited maximum.
In this way it would be theoretically possible, if the volume-control hardware is defective in design and the volume-control applet has the right vulnerabilities, to build a malicious web page that cranks up the volume on the laptop and generates a waveform that damages the user's speakers.
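A minimal sketch of the software-side mixing described above. The function name and the 0.8 ceiling are my own invented placeholders, not Dell's actual driver code; the point is that the OS slider only scales samples beneath a fixed hardware-chosen ceiling, and a driver that skipped the clamp (or accepted a slider value above 1.0) is the kind of defect that could overdrive the speakers:

```python
HARDWARE_LIMIT = 0.8  # hypothetical fraction of full speaker drive the vendor hard-wires

def apply_volume(samples, software_volume):
    """Scale raw samples (in -1.0..1.0) by the OS volume slider, then
    clamp to the hardware-limited ceiling.  Dropping either the slider
    sanitization or the final clamp models the defect discussed above."""
    vol = max(0.0, min(software_volume, 1.0))  # sanitize slider input
    return [max(-HARDWARE_LIMIT, min(s * vol * HARDWARE_LIMIT, HARDWARE_LIMIT))
            for s in samples]
```

With the clamp in place, even a web page that tricks the applet into requesting 200% volume can only drive the speakers at the hardware limit.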
This leads me to believe that Dell is covering up a defect in the laptop's speaker-control design to avoid having to perform a costly recall on the affected models.
I hope I can make the connections you need to see the logic that other posters have been attempting to convey.
With non-sponsored open source, in most cases the developer is creating the software mostly for their own personal understanding of software design, or to build personal tools for whatever other projects they're working on. At this level the software is unpolished, providing only enough of an interface for a single person to use it. Other developers may join the project for their own growth and understanding: an interface-design major might put a better GUI front end on the project; someone who needs to learn database design might write more robust SQL statements, or make the code behave better with the APIs of a certain database engine. Regardless of who comes in and adds to the project, with non-sponsored open source there is no vested interest beyond what a person puts in for their own personal growth and understanding. Also, everything about the application's development is a serious "might." At this level, it's not very often that people trained in QA have any interest in getting involved, as college classes on systems analysis tend to focus on use cases and hardly ever suggest students go out and find an open-source project to test. Not that they would get anywhere with the egos that tend to be involved in many non-sponsored open-source projects: coders at this level can, at times, have very strong emotional attachments to their code and take criticism as a challenge to a duel.
The next level is sponsored open source. At this level, people are putting money into a project to help make it as good as it can be. This incentivizes developers to produce cleaner code with better functionality that appeals to a more widespread audience. As the project grows and gets more monetary involvement, a lead developer may look over the system he's building as a whole and try to identify areas that need improvement and features that would be good to add. For this he may start hiring people to fill the roles where he isn't necessarily the best fit, in the interest of putting out a better project. Depending on the project lead's ability to take criticism in the face of this goal, he may actually hire some analysts to perform QA and testing to help the project appeal to a wider audience, as well as some form of tech support to help the end users use the application.
The last level is "sponsored" closed source (quotes denote redundancy). If a person or business is developing closed source, it's because they live or die by the code they're writing, and they have every intention of living by it. This code is either going into a product distributed to end users, or into an internal application handling some aspect of internal business processes. In the former case it absolutely has to be easy to use, well documented, well tested, patched and maintained, and have support teams available to help the end user troubleshoot any problems that crop up. If any part of this doesn't live up to some kind of market standard, the product will fail. In the latter case, much the same philosophy applies, with the addition of mission-critical reliability: if internal software is regularly choking and losing customer records to a buffer overflow three times a week, one can assume that if it isn't fixed in short order the entire business will very shortly wind up on the rocks. For both of these cases, companies that develop closed-source software will set up an entire department of quality assurance and testers to make sure that what goes into production is the absolute best it can be.
I'm sure there are other aspects of these three tiers where I've completely missed the mark, but this is what comes to mind quickest for explaining why closed-source software has the greater likelihood of being a higher-quality application.
If Machiavelli were a hacker, he'd have worked for the CSSG. -- Phil Lapsley